2025-04-04 00:00:09.817411 | Job console starting...
2025-04-04 00:00:09.833926 | Updating repositories
2025-04-04 00:00:11.594789 | Preparing job workspace
2025-04-04 00:00:14.008175 | Running Ansible setup...
2025-04-04 00:00:21.417447 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2025-04-04 00:00:22.478098 |
2025-04-04 00:00:22.478216 | PLAY [Base pre]
2025-04-04 00:00:22.573749 |
2025-04-04 00:00:22.573862 | TASK [Setup log path fact]
2025-04-04 00:00:22.626775 | orchestrator | ok
2025-04-04 00:00:22.687763 |
2025-04-04 00:00:22.687874 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-04-04 00:00:22.748492 | orchestrator | ok
2025-04-04 00:00:22.772682 |
2025-04-04 00:00:22.772783 | TASK [emit-job-header : Print job information]
2025-04-04 00:00:22.842874 | # Job Information
2025-04-04 00:00:22.843025 | Ansible Version: 2.15.3
2025-04-04 00:00:22.843055 | Job: testbed-deploy-stable-in-a-nutshell-ubuntu-24.04
2025-04-04 00:00:22.843079 | Pipeline: periodic-midnight
2025-04-04 00:00:22.843097 | Executor: 7d211f194f6a
2025-04-04 00:00:22.843112 | Triggered by: https://github.com/osism/testbed
2025-04-04 00:00:22.843128 | Event ID: be35e376974c4973be0adf5df5fc4d80
2025-04-04 00:00:22.849200 |
2025-04-04 00:00:22.849286 | LOOP [emit-job-header : Print node information]
2025-04-04 00:00:23.012015 | orchestrator | ok:
2025-04-04 00:00:23.012211 | orchestrator | # Node Information
2025-04-04 00:00:23.012242 | orchestrator | Inventory Hostname: orchestrator
2025-04-04 00:00:23.012262 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2025-04-04 00:00:23.012280 | orchestrator | Username: zuul-testbed01
2025-04-04 00:00:23.012296 | orchestrator | Distro: Debian 12.10
2025-04-04 00:00:23.012316 | orchestrator | Provider: static-testbed
2025-04-04 00:00:23.012333 | orchestrator | Label: testbed-orchestrator
2025-04-04 00:00:23.012350 | orchestrator | Product Name: OpenStack Nova
2025-04-04 00:00:23.012366 | orchestrator | Interface IP: 81.163.193.140
2025-04-04 00:00:23.032186 |
2025-04-04 00:00:23.032287 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2025-04-04 00:00:23.411764 | orchestrator -> localhost | changed
2025-04-04 00:00:23.420431 |
2025-04-04 00:00:23.420536 | TASK [log-inventory : Copy ansible inventory to logs dir]
2025-04-04 00:00:24.807490 | orchestrator -> localhost | changed
2025-04-04 00:00:24.840984 |
2025-04-04 00:00:24.841109 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2025-04-04 00:00:25.193586 | orchestrator -> localhost | ok
2025-04-04 00:00:25.200351 |
2025-04-04 00:00:25.200479 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2025-04-04 00:00:25.263656 | orchestrator | ok
2025-04-04 00:00:25.285455 | orchestrator | included: /var/lib/zuul/builds/404de2cc1cd54369b8b7ac59e13b3105/trusted/project_1/opendev.org/zuul/zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2025-04-04 00:00:25.292539 |
2025-04-04 00:00:25.292619 | TASK [add-build-sshkey : Create Temp SSH key]
2025-04-04 00:00:26.653052 | orchestrator -> localhost | Generating public/private rsa key pair.
2025-04-04 00:00:26.653221 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/404de2cc1cd54369b8b7ac59e13b3105/work/404de2cc1cd54369b8b7ac59e13b3105_id_rsa
2025-04-04 00:00:26.653254 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/404de2cc1cd54369b8b7ac59e13b3105/work/404de2cc1cd54369b8b7ac59e13b3105_id_rsa.pub
2025-04-04 00:00:26.653275 | orchestrator -> localhost | The key fingerprint is:
2025-04-04 00:00:26.653295 | orchestrator -> localhost | SHA256:qIwhHs6d0WyP9rKzQkNTesAEERjnWIbcxlKAM/rOh7Y zuul-build-sshkey
2025-04-04 00:00:26.653314 | orchestrator -> localhost | The key's randomart image is:
2025-04-04 00:00:26.653331 | orchestrator -> localhost | +---[RSA 3072]----+
2025-04-04 00:00:26.653348 | orchestrator -> localhost | |=B#o |
2025-04-04 00:00:26.653364 | orchestrator -> localhost | |*O * . |
2025-04-04 00:00:26.653386 | orchestrator -> localhost | |oo+ + |
2025-04-04 00:00:26.653402 | orchestrator -> localhost | |. +o. . |
2025-04-04 00:00:26.653418 | orchestrator -> localhost | |.oo.o+. S |
2025-04-04 00:00:26.653433 | orchestrator -> localhost | |+.+*+.o |
2025-04-04 00:00:26.653452 | orchestrator -> localhost | | *o++o . |
2025-04-04 00:00:26.653468 | orchestrator -> localhost | | =.oo. |
2025-04-04 00:00:26.653484 | orchestrator -> localhost | | .Eo.o=. |
2025-04-04 00:00:26.653500 | orchestrator -> localhost | +----[SHA256]-----+
2025-04-04 00:00:26.653538 | orchestrator -> localhost | ok: Runtime: 0:00:00.426409
2025-04-04 00:00:26.660427 |
2025-04-04 00:00:26.660505 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2025-04-04 00:00:26.711553 | orchestrator | ok
2025-04-04 00:00:26.723407 | orchestrator | included: /var/lib/zuul/builds/404de2cc1cd54369b8b7ac59e13b3105/trusted/project_1/opendev.org/zuul/zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2025-04-04 00:00:26.735249 |
2025-04-04 00:00:26.735336 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2025-04-04 00:00:26.779549 | orchestrator | skipping: Conditional result was False
2025-04-04 00:00:26.786311 |
2025-04-04 00:00:26.786391 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2025-04-04 00:00:27.384938 | orchestrator | changed
2025-04-04 00:00:27.392588 |
2025-04-04 00:00:27.392681 | TASK [add-build-sshkey : Make sure user has a .ssh]
2025-04-04 00:00:27.681587 | orchestrator | ok
2025-04-04 00:00:27.689099 |
2025-04-04 00:00:27.689192 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2025-04-04 00:00:28.111419 | orchestrator | ok
2025-04-04 00:00:28.117693 |
2025-04-04 00:00:28.117777 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2025-04-04 00:00:28.522341 | orchestrator | ok
2025-04-04 00:00:28.573502 |
2025-04-04 00:00:28.573589 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2025-04-04 00:00:28.587145 | orchestrator | skipping: Conditional result was False
2025-04-04 00:00:28.594681 |
2025-04-04 00:00:28.594762 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2025-04-04 00:00:29.204252 | orchestrator -> localhost | changed
2025-04-04 00:00:29.223683 |
2025-04-04 00:00:29.223774 | TASK [add-build-sshkey : Add back temp key]
2025-04-04 00:00:29.551991 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/404de2cc1cd54369b8b7ac59e13b3105/work/404de2cc1cd54369b8b7ac59e13b3105_id_rsa (zuul-build-sshkey)
2025-04-04 00:00:29.552166 | orchestrator -> localhost | ok: Runtime: 0:00:00.014723
2025-04-04 00:00:29.560073 |
2025-04-04 00:00:29.560155 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2025-04-04 00:00:30.025995 | orchestrator | ok
2025-04-04 00:00:30.031975 |
2025-04-04 00:00:30.032068 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2025-04-04 00:00:30.106572 | orchestrator | skipping: Conditional result was False
2025-04-04 00:00:30.129704 |
2025-04-04 00:00:30.129801 | TASK [start-zuul-console : Start zuul_console daemon.]
2025-04-04 00:00:30.597226 | orchestrator | ok
2025-04-04 00:00:30.616977 |
2025-04-04 00:00:30.617081 | TASK [validate-host : Define zuul_info_dir fact]
2025-04-04 00:00:30.655249 | orchestrator | ok
2025-04-04 00:00:30.663522 |
2025-04-04 00:00:30.663603 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2025-04-04 00:00:31.134719 | orchestrator -> localhost | ok
2025-04-04 00:00:31.141534 |
2025-04-04 00:00:31.141613 | TASK [validate-host : Collect information about the host]
2025-04-04 00:00:32.249341 | orchestrator | ok
2025-04-04 00:00:32.267607 |
2025-04-04 00:00:32.267711 | TASK [validate-host : Sanitize hostname]
2025-04-04 00:00:32.336450 | orchestrator | ok
2025-04-04 00:00:32.346176 |
2025-04-04 00:00:32.346293 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2025-04-04 00:00:32.805005 | orchestrator -> localhost | changed
2025-04-04 00:00:32.811923 |
2025-04-04 00:00:32.812028 | TASK [validate-host : Collect information about zuul worker]
2025-04-04 00:00:33.331108 | orchestrator | ok
2025-04-04 00:00:33.338213 |
2025-04-04 00:00:33.338305 | TASK [validate-host : Write out all zuul information for each host]
2025-04-04 00:00:33.864545 | orchestrator -> localhost | changed
2025-04-04 00:00:33.875548 |
2025-04-04 00:00:33.875633 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2025-04-04 00:00:34.179717 | orchestrator | ok
2025-04-04 00:00:34.186922 |
2025-04-04 00:00:34.187026 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2025-04-04 00:00:55.520614 | orchestrator | changed:
2025-04-04 00:00:55.520865 | orchestrator | .d..t...... src/
2025-04-04 00:00:55.520907 | orchestrator | .d..t...... src/github.com/
2025-04-04 00:00:55.520932 | orchestrator | .d..t...... src/github.com/osism/
2025-04-04 00:00:55.520953 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2025-04-04 00:00:55.520995 | orchestrator | RedHat.yml
2025-04-04 00:00:55.535722 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2025-04-04 00:00:55.535739 | orchestrator | RedHat.yml
2025-04-04 00:00:55.535792 | orchestrator | = 1.53.0"...
2025-04-04 00:01:07.775740 | orchestrator | 00:01:07.775 STDOUT terraform: - Finding hashicorp/local versions matching ">= 2.2.0"...
2025-04-04 00:01:09.026485 | orchestrator | 00:01:09.026 STDOUT terraform: - Installing hashicorp/null v3.2.3...
2025-04-04 00:01:10.117124 | orchestrator | 00:01:10.116 STDOUT terraform: - Installed hashicorp/null v3.2.3 (signed, key ID 0C0AF313E5FD9F80)
2025-04-04 00:01:11.058490 | orchestrator | 00:01:11.058 STDOUT terraform: - Installing terraform-provider-openstack/openstack v3.0.0...
2025-04-04 00:01:13.220171 | orchestrator | 00:01:13.219 STDOUT terraform: - Installed terraform-provider-openstack/openstack v3.0.0 (signed, key ID 4F80527A391BEFD2)
2025-04-04 00:01:14.441477 | orchestrator | 00:01:14.441 STDOUT terraform: - Installing hashicorp/local v2.5.2...
2025-04-04 00:01:15.378635 | orchestrator | 00:01:15.378 STDOUT terraform: - Installed hashicorp/local v2.5.2 (signed, key ID 0C0AF313E5FD9F80)
2025-04-04 00:01:15.378849 | orchestrator | 00:01:15.378 STDOUT terraform: Providers are signed by their developers.
2025-04-04 00:01:15.379083 | orchestrator | 00:01:15.378 STDOUT terraform: If you'd like to know more about provider signing, you can read about it here:
2025-04-04 00:01:15.379095 | orchestrator | 00:01:15.378 STDOUT terraform: https://opentofu.org/docs/cli/plugins/signing/
2025-04-04 00:01:15.379103 | orchestrator | 00:01:15.378 STDOUT terraform: OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2025-04-04 00:01:15.379497 | orchestrator | 00:01:15.378 STDOUT terraform: selections it made above. Include this file in your version control repository
2025-04-04 00:01:15.379512 | orchestrator | 00:01:15.378 STDOUT terraform: so that OpenTofu can guarantee to make the same selections by default when
2025-04-04 00:01:15.379517 | orchestrator | 00:01:15.379 STDOUT terraform: you run "tofu init" in the future.
2025-04-04 00:01:15.379526 | orchestrator | 00:01:15.379 STDOUT terraform: OpenTofu has been successfully initialized!
2025-04-04 00:01:15.379812 | orchestrator | 00:01:15.379 STDOUT terraform: You may now begin working with OpenTofu. Try running "tofu plan" to see
2025-04-04 00:01:15.994442 | orchestrator | 00:01:15.379 STDOUT terraform: any changes that are required for your infrastructure. All OpenTofu commands
2025-04-04 00:01:15.994570 | orchestrator | 00:01:15.379 STDOUT terraform: should now work.
2025-04-04 00:01:15.994595 | orchestrator | 00:01:15.379 STDOUT terraform: If you ever set or change modules or backend configuration for OpenTofu,
2025-04-04 00:01:15.994609 | orchestrator | 00:01:15.379 STDOUT terraform: rerun this command to reinitialize your working directory. If you forget, other
2025-04-04 00:01:15.994623 | orchestrator | 00:01:15.379 STDOUT terraform: commands will detect it and remind you to do so if necessary.
2025-04-04 00:01:15.994668 | orchestrator | 00:01:15.994 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed01/terraform` instead.
2025-04-04 00:01:16.163420 | orchestrator | 00:01:16.163 STDOUT terraform: Created and switched to workspace "ci"!
2025-04-04 00:01:16.163492 | orchestrator | 00:01:16.163 STDOUT terraform: You're now on a new, empty workspace. Workspaces isolate their state,
2025-04-04 00:01:16.363470 | orchestrator | 00:01:16.163 STDOUT terraform: so if you run "tofu plan" OpenTofu will not see any existing state
2025-04-04 00:01:16.363567 | orchestrator | 00:01:16.163 STDOUT terraform: for this configuration.
2025-04-04 00:01:16.363609 | orchestrator | 00:01:16.363 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed01/terraform` instead.
2025-04-04 00:01:16.451638 | orchestrator | 00:01:16.451 STDOUT terraform: ci.auto.tfvars
2025-04-04 00:01:16.621634 | orchestrator | 00:01:16.621 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed01/terraform` instead.
2025-04-04 00:01:17.435087 | orchestrator | 00:01:17.433 STDOUT terraform: data.openstack_networking_network_v2.public: Reading...
2025-04-04 00:01:17.950221 | orchestrator | 00:01:17.949 STDOUT terraform: data.openstack_networking_network_v2.public: Read complete after 1s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2025-04-04 00:01:18.137110 | orchestrator | 00:01:18.136 STDOUT terraform: OpenTofu used the selected providers to generate the following execution
2025-04-04 00:01:18.137195 | orchestrator | 00:01:18.137 STDOUT terraform: plan. Resource actions are indicated with the following symbols:
2025-04-04 00:01:18.137324 | orchestrator | 00:01:18.137 STDOUT terraform:   + create
2025-04-04 00:01:18.137351 | orchestrator | 00:01:18.137 STDOUT terraform:  <= read (data resources)
2025-04-04 00:01:18.137366 | orchestrator | 00:01:18.137 STDOUT terraform: OpenTofu will perform the following actions:
2025-04-04 00:01:18.137427 | orchestrator | 00:01:18.137 STDOUT terraform:   # data.openstack_images_image_v2.image will be read during apply
2025-04-04 00:01:18.137508 | orchestrator | 00:01:18.137 STDOUT terraform:   # (config refers to values not yet known)
2025-04-04 00:01:18.137586 | orchestrator | 00:01:18.137 STDOUT terraform:  <= data "openstack_images_image_v2" "image" {
2025-04-04 00:01:18.137647 | orchestrator | 00:01:18.137 STDOUT terraform:       + checksum = (known after apply)
2025-04-04 00:01:18.137717 | orchestrator | 00:01:18.137 STDOUT terraform:       + created_at = (known after apply)
2025-04-04 00:01:18.137794 | orchestrator | 00:01:18.137 STDOUT terraform:       + file = (known after apply)
2025-04-04 00:01:18.137881 | orchestrator | 00:01:18.137 STDOUT terraform:       + id = (known after apply)
2025-04-04 00:01:18.137932 | orchestrator | 00:01:18.137 STDOUT terraform:       + metadata = (known after apply)
2025-04-04 00:01:18.138008 | orchestrator | 00:01:18.137 STDOUT terraform:       + min_disk_gb = (known after apply)
2025-04-04 00:01:18.138101 | orchestrator | 00:01:18.137 STDOUT terraform:       + min_ram_mb = (known after apply)
2025-04-04 00:01:18.138153 | orchestrator | 00:01:18.138 STDOUT terraform:       + most_recent = true
2025-04-04 00:01:18.138232 | orchestrator | 00:01:18.138 STDOUT terraform:       + name = (known after apply)
2025-04-04 00:01:18.138321 | orchestrator | 00:01:18.138 STDOUT terraform:       + protected = (known after apply)
2025-04-04 00:01:18.138370 | orchestrator | 00:01:18.138 STDOUT terraform:       + region = (known after apply)
2025-04-04 00:01:18.138440 | orchestrator | 00:01:18.138 STDOUT terraform:       + schema = (known after apply)
2025-04-04 00:01:18.138527 | orchestrator | 00:01:18.138 STDOUT terraform:       + size_bytes = (known after apply)
2025-04-04 00:01:18.138593 | orchestrator | 00:01:18.138 STDOUT terraform:       + tags = (known after apply)
2025-04-04 00:01:18.138657 | orchestrator | 00:01:18.138 STDOUT terraform:       + updated_at = (known after apply)
2025-04-04 00:01:18.138672 | orchestrator | 00:01:18.138 STDOUT terraform:     }
2025-04-04 00:01:18.138855 | orchestrator | 00:01:18.138 STDOUT terraform:   # data.openstack_images_image_v2.image_node will be read during apply
2025-04-04 00:01:18.138931 | orchestrator | 00:01:18.138 STDOUT terraform:   # (config refers to values not yet known)
2025-04-04 00:01:18.138943 | orchestrator | 00:01:18.138 STDOUT terraform:  <= data "openstack_images_image_v2" "image_node" {
2025-04-04 00:01:18.139001 | orchestrator | 00:01:18.138 STDOUT terraform:       + checksum = (known after apply)
2025-04-04 00:01:18.139077 | orchestrator | 00:01:18.138 STDOUT terraform:       + created_at = (known after apply)
2025-04-04 00:01:18.139148 | orchestrator | 00:01:18.139 STDOUT terraform:       + file = (known after apply)
2025-04-04 00:01:18.139219 | orchestrator | 00:01:18.139 STDOUT terraform:       + id = (known after apply)
2025-04-04 00:01:18.139304 | orchestrator | 00:01:18.139 STDOUT terraform:       + metadata = (known after apply)
2025-04-04 00:01:18.139410 | orchestrator | 00:01:18.139 STDOUT terraform:       + min_disk_gb = (known after apply)
2025-04-04 00:01:18.139486 | orchestrator | 00:01:18.139 STDOUT terraform:       + min_ram_mb = (known after apply)
2025-04-04 00:01:18.139545 | orchestrator | 00:01:18.139 STDOUT terraform:       + most_recent = true
2025-04-04 00:01:18.139604 | orchestrator | 00:01:18.139 STDOUT terraform:       + name = (known after apply)
2025-04-04 00:01:18.139694 | orchestrator | 00:01:18.139 STDOUT terraform:       + protected = (known after apply)
2025-04-04 00:01:18.139737 | orchestrator | 00:01:18.139 STDOUT terraform:       + region = (known after apply)
2025-04-04 00:01:18.139809 | orchestrator | 00:01:18.139 STDOUT terraform:       + schema = (known after apply)
2025-04-04 00:01:18.139882 | orchestrator | 00:01:18.139 STDOUT terraform:       + size_bytes = (known after apply)
2025-04-04 00:01:18.139952 | orchestrator | 00:01:18.139 STDOUT terraform:       + tags = (known after apply)
2025-04-04 00:01:18.140008 | orchestrator | 00:01:18.139 STDOUT terraform:       + updated_at = (known after apply)
2025-04-04 00:01:18.140026 | orchestrator | 00:01:18.139 STDOUT terraform:     }
2025-04-04 00:01:18.140104 | orchestrator | 00:01:18.140 STDOUT terraform:   # local_file.MANAGER_ADDRESS will be created
2025-04-04 00:01:18.140173 | orchestrator | 00:01:18.140 STDOUT terraform:   + resource "local_file" "MANAGER_ADDRESS" {
2025-04-04 00:01:18.140273 | orchestrator | 00:01:18.140 STDOUT terraform:       + content = (known after apply)
2025-04-04 00:01:18.140361 | orchestrator | 00:01:18.140 STDOUT terraform:       + content_base64sha256 = (known after apply)
2025-04-04 00:01:18.140446 | orchestrator | 00:01:18.140 STDOUT terraform:       + content_base64sha512 = (known after apply)
2025-04-04 00:01:18.140534 | orchestrator | 00:01:18.140 STDOUT terraform:       + content_md5 = (known after apply)
2025-04-04 00:01:18.140637 | orchestrator | 00:01:18.140 STDOUT terraform:       + content_sha1 = (known after apply)
2025-04-04 00:01:18.140726 | orchestrator | 00:01:18.140 STDOUT terraform:       + content_sha256 = (known after apply)
2025-04-04 00:01:18.140795 | orchestrator | 00:01:18.140 STDOUT terraform:       + content_sha512 = (known after apply)
2025-04-04 00:01:18.140837 | orchestrator | 00:01:18.140 STDOUT terraform:       + directory_permission = "0777"
2025-04-04 00:01:18.140925 | orchestrator | 00:01:18.140 STDOUT terraform:       + file_permission = "0644"
2025-04-04 00:01:18.141016 | orchestrator | 00:01:18.140 STDOUT terraform:       + filename = ".MANAGER_ADDRESS.ci"
2025-04-04 00:01:18.141088 | orchestrator | 00:01:18.140 STDOUT terraform:       + id = (known after apply)
2025-04-04 00:01:18.141163 | orchestrator | 00:01:18.141 STDOUT terraform:     }
2025-04-04 00:01:18.141195 | orchestrator | 00:01:18.141 STDOUT terraform:   # local_file.id_rsa_pub will be created
2025-04-04 00:01:18.141211 | orchestrator | 00:01:18.141 STDOUT terraform:   + resource "local_file" "id_rsa_pub" {
2025-04-04 00:01:18.141340 | orchestrator | 00:01:18.141 STDOUT terraform:       + content = (known after apply)
2025-04-04 00:01:18.141438 | orchestrator | 00:01:18.141 STDOUT terraform:       + content_base64sha256 = (known after apply)
2025-04-04 00:01:18.141505 | orchestrator | 00:01:18.141 STDOUT terraform:       + content_base64sha512 = (known after apply)
2025-04-04 00:01:18.141576 | orchestrator | 00:01:18.141 STDOUT terraform:       + content_md5 = (known after apply)
2025-04-04 00:01:18.141659 | orchestrator | 00:01:18.141 STDOUT terraform:       + content_sha1 = (known after apply)
2025-04-04 00:01:18.141742 | orchestrator | 00:01:18.141 STDOUT terraform:       + content_sha256 = (known after apply)
2025-04-04 00:01:18.141832 | orchestrator | 00:01:18.141 STDOUT terraform:       + content_sha512 = (known after apply)
2025-04-04 00:01:18.141877 | orchestrator | 00:01:18.141 STDOUT terraform:       + directory_permission = "0777"
2025-04-04 00:01:18.141933 | orchestrator | 00:01:18.141 STDOUT terraform:       + file_permission = "0644"
2025-04-04 00:01:18.142036 | orchestrator | 00:01:18.141 STDOUT terraform:       + filename = ".id_rsa.ci.pub"
2025-04-04 00:01:18.142117 | orchestrator | 00:01:18.141 STDOUT terraform:       + id = (known after apply)
2025-04-04 00:01:18.142135 | orchestrator | 00:01:18.142 STDOUT terraform:     }
2025-04-04 00:01:18.142220 | orchestrator | 00:01:18.142 STDOUT terraform:   # local_file.inventory will be created
2025-04-04 00:01:18.142275 | orchestrator | 00:01:18.142 STDOUT terraform:   + resource "local_file" "inventory" {
2025-04-04 00:01:18.142371 | orchestrator | 00:01:18.142 STDOUT terraform:       + content = (known after apply)
2025-04-04 00:01:18.142438 | orchestrator | 00:01:18.142 STDOUT terraform:       + content_base64sha256 = (known after apply)
2025-04-04 00:01:18.142529 | orchestrator | 00:01:18.142 STDOUT terraform:       + content_base64sha512 = (known after apply)
2025-04-04 00:01:18.142618 | orchestrator | 00:01:18.142 STDOUT terraform:       + content_md5 = (known after apply)
2025-04-04 00:01:18.142700 | orchestrator | 00:01:18.142 STDOUT terraform:       + content_sha1 = (known after apply)
2025-04-04 00:01:18.142782 | orchestrator | 00:01:18.142 STDOUT terraform:       + content_sha256 = (known after apply)
2025-04-04 00:01:18.142865 | orchestrator | 00:01:18.142 STDOUT terraform:       + content_sha512 = (known after apply)
2025-04-04 00:01:18.142921 | orchestrator | 00:01:18.142 STDOUT terraform:       + directory_permission = "0777"
2025-04-04 00:01:18.142977 | orchestrator | 00:01:18.142 STDOUT terraform:       + file_permission = "0644"
2025-04-04 00:01:18.143054 | orchestrator | 00:01:18.142 STDOUT terraform:       + filename = "inventory.ci"
2025-04-04 00:01:18.143138 | orchestrator | 00:01:18.143 STDOUT terraform:       + id = (known after apply)
2025-04-04 00:01:18.143168 | orchestrator | 00:01:18.143 STDOUT terraform:     }
2025-04-04 00:01:18.143238 | orchestrator | 00:01:18.143 STDOUT terraform:   # local_sensitive_file.id_rsa will be created
2025-04-04 00:01:18.143349 | orchestrator | 00:01:18.143 STDOUT terraform:   + resource "local_sensitive_file" "id_rsa" {
2025-04-04 00:01:18.143425 | orchestrator | 00:01:18.143 STDOUT terraform:       + content = (sensitive value)
2025-04-04 00:01:18.143508 | orchestrator | 00:01:18.143 STDOUT terraform:       + content_base64sha256 = (known after apply)
2025-04-04 00:01:18.143596 | orchestrator | 00:01:18.143 STDOUT terraform:       + content_base64sha512 = (known after apply)
2025-04-04 00:01:18.143664 | orchestrator | 00:01:18.143 STDOUT terraform:       + content_md5 = (known after apply)
2025-04-04 00:01:18.143738 | orchestrator | 00:01:18.143 STDOUT terraform:       + content_sha1 = (known after apply)
2025-04-04 00:01:18.143803 | orchestrator | 00:01:18.143 STDOUT terraform:       + content_sha256 = (known after apply)
2025-04-04 00:01:18.143871 | orchestrator | 00:01:18.143 STDOUT terraform:       + content_sha512 = (known after apply)
2025-04-04 00:01:18.143919 | orchestrator | 00:01:18.143 STDOUT terraform:       + directory_permission = "0700"
2025-04-04 00:01:18.143967 | orchestrator | 00:01:18.143 STDOUT terraform:       + file_permission = "0600"
2025-04-04 00:01:18.144026 | orchestrator | 00:01:18.143 STDOUT terraform:       + filename = ".id_rsa.ci"
2025-04-04 00:01:18.144099 | orchestrator | 00:01:18.144 STDOUT terraform:       + id = (known after apply)
2025-04-04 00:01:18.144113 | orchestrator | 00:01:18.144 STDOUT terraform:     }
2025-04-04 00:01:18.144175 | orchestrator | 00:01:18.144 STDOUT terraform:   # null_resource.node_semaphore will be created
2025-04-04 00:01:18.144234 | orchestrator | 00:01:18.144 STDOUT terraform:   + resource "null_resource" "node_semaphore" {
2025-04-04 00:01:18.144285 | orchestrator | 00:01:18.144 STDOUT terraform:       + id = (known after apply)
2025-04-04 00:01:18.144300 | orchestrator | 00:01:18.144 STDOUT terraform:     }
2025-04-04 00:01:18.144402 | orchestrator | 00:01:18.144 STDOUT terraform:   # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2025-04-04 00:01:18.144496 | orchestrator | 00:01:18.144 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2025-04-04 00:01:18.144556 | orchestrator | 00:01:18.144 STDOUT terraform:       + attachment = (known after apply)
2025-04-04 00:01:18.144597 | orchestrator | 00:01:18.144 STDOUT terraform:       + availability_zone = "nova"
2025-04-04 00:01:18.144667 | orchestrator | 00:01:18.144 STDOUT terraform:       + id = (known after apply)
2025-04-04 00:01:18.144725 | orchestrator | 00:01:18.144 STDOUT terraform:       + image_id = (known after apply)
2025-04-04 00:01:18.144785 | orchestrator | 00:01:18.144 STDOUT terraform:       + metadata = (known after apply)
2025-04-04 00:01:18.144863 | orchestrator | 00:01:18.144 STDOUT terraform:       + name = "testbed-volume-manager-base"
2025-04-04 00:01:18.144923 | orchestrator | 00:01:18.144 STDOUT terraform:       + region = (known after apply)
2025-04-04 00:01:18.144964 | orchestrator | 00:01:18.144 STDOUT terraform:       + size = 80
2025-04-04 00:01:18.145005 | orchestrator | 00:01:18.144 STDOUT terraform:       + volume_type = "ssd"
2025-04-04 00:01:18.145018 | orchestrator | 00:01:18.144 STDOUT terraform:     }
2025-04-04 00:01:18.145118 | orchestrator | 00:01:18.145 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2025-04-04 00:01:18.145209 | orchestrator | 00:01:18.145 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-04-04 00:01:18.145281 | orchestrator | 00:01:18.145 STDOUT terraform:       + attachment = (known after apply)
2025-04-04 00:01:18.145389 | orchestrator | 00:01:18.145 STDOUT terraform:       + availability_zone = "nova"
2025-04-04 00:01:18.145450 | orchestrator | 00:01:18.145 STDOUT terraform:       + id = (known after apply)
2025-04-04 00:01:18.145469 | orchestrator | 00:01:18.145 STDOUT terraform:       + image_id = (known after apply)
2025-04-04 00:01:18.145485 | orchestrator | 00:01:18.145 STDOUT terraform:       + metadata = (known after apply)
2025-04-04 00:01:18.145557 | orchestrator | 00:01:18.145 STDOUT terraform:       + name = "testbed-volume-0-node-base"
2025-04-04 00:01:18.145608 | orchestrator | 00:01:18.145 STDOUT terraform:       + region = (known after apply)
2025-04-04 00:01:18.145639 | orchestrator | 00:01:18.145 STDOUT terraform:       + size = 80
2025-04-04 00:01:18.145669 | orchestrator | 00:01:18.145 STDOUT terraform:       + volume_type = "ssd"
2025-04-04 00:01:18.145683 | orchestrator | 00:01:18.145 STDOUT terraform:     }
2025-04-04 00:01:18.145764 | orchestrator | 00:01:18.145 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2025-04-04 00:01:18.145841 | orchestrator | 00:01:18.145 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-04-04 00:01:18.145892 | orchestrator | 00:01:18.145 STDOUT terraform:       + attachment = (known after apply)
2025-04-04 00:01:18.145921 | orchestrator | 00:01:18.145 STDOUT terraform:       + availability_zone = "nova"
2025-04-04 00:01:18.145975 | orchestrator | 00:01:18.145 STDOUT terraform:       + id = (known after apply)
2025-04-04 00:01:18.146024 | orchestrator | 00:01:18.145 STDOUT terraform:       + image_id = (known after apply)
2025-04-04 00:01:18.146093 | orchestrator | 00:01:18.146 STDOUT terraform:       + metadata = (known after apply)
2025-04-04 00:01:18.146157 | orchestrator | 00:01:18.146 STDOUT terraform:       + name = "testbed-volume-1-node-base"
2025-04-04 00:01:18.146215 | orchestrator | 00:01:18.146 STDOUT terraform:       + region = (known after apply)
2025-04-04 00:01:18.146239 | orchestrator | 00:01:18.146 STDOUT terraform:       + size = 80
2025-04-04 00:01:18.146294 | orchestrator | 00:01:18.146 STDOUT terraform:       + volume_type = "ssd"
2025-04-04 00:01:18.146309 | orchestrator | 00:01:18.146 STDOUT terraform:     }
2025-04-04 00:01:18.146387 | orchestrator | 00:01:18.146 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2025-04-04 00:01:18.146463 | orchestrator | 00:01:18.146 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-04-04 00:01:18.146515 | orchestrator | 00:01:18.146 STDOUT terraform:       + attachment = (known after apply)
2025-04-04 00:01:18.146550 | orchestrator | 00:01:18.146 STDOUT terraform:       + availability_zone = "nova"
2025-04-04 00:01:18.146602 | orchestrator | 00:01:18.146 STDOUT terraform:       + id = (known after apply)
2025-04-04 00:01:18.146654 | orchestrator | 00:01:18.146 STDOUT terraform:       + image_id = (known after apply)
2025-04-04 00:01:18.146707 | orchestrator | 00:01:18.146 STDOUT terraform:       + metadata = (known after apply)
2025-04-04 00:01:18.146771 | orchestrator | 00:01:18.146 STDOUT terraform:       + name = "testbed-volume-2-node-base"
2025-04-04 00:01:18.146823 | orchestrator | 00:01:18.146 STDOUT terraform:       + region = (known after apply)
2025-04-04 00:01:18.146857 | orchestrator | 00:01:18.146 STDOUT terraform:       + size = 80
2025-04-04 00:01:18.146893 | orchestrator | 00:01:18.146 STDOUT terraform:       + volume_type = "ssd"
2025-04-04 00:01:18.146907 | orchestrator | 00:01:18.146 STDOUT terraform:     }
2025-04-04 00:01:18.146988 | orchestrator | 00:01:18.146 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2025-04-04 00:01:18.147066 | orchestrator | 00:01:18.146 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-04-04 00:01:18.147116 | orchestrator | 00:01:18.147 STDOUT terraform:       + attachment = (known after apply)
2025-04-04 00:01:18.147149 | orchestrator | 00:01:18.147 STDOUT terraform:       + availability_zone = "nova"
2025-04-04 00:01:18.147202 | orchestrator | 00:01:18.147 STDOUT terraform:       + id = (known after apply)
2025-04-04 00:01:18.147307 | orchestrator | 00:01:18.147 STDOUT terraform:       + image_id = (known after apply)
2025-04-04 00:01:18.147347 | orchestrator | 00:01:18.147 STDOUT terraform:       + metadata = (known after apply)
2025-04-04 00:01:18.147401 | orchestrator | 00:01:18.147 STDOUT terraform:       + name = "testbed-volume-3-node-base"
2025-04-04 00:01:18.147429 | orchestrator | 00:01:18.147 STDOUT terraform:       + region = (known after apply)
2025-04-04 00:01:18.147442 | orchestrator | 00:01:18.147 STDOUT terraform:       + size = 80
2025-04-04 00:01:18.147467 | orchestrator | 00:01:18.147 STDOUT terraform:       + volume_type = "ssd"
2025-04-04 00:01:18.147479 | orchestrator | 00:01:18.147 STDOUT terraform:     }
2025-04-04 00:01:18.147557 | orchestrator | 00:01:18.147 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2025-04-04 00:01:18.147595 | orchestrator | 00:01:18.147 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-04-04 00:01:18.147621 | orchestrator | 00:01:18.147 STDOUT terraform:       + attachment = (known after apply)
2025-04-04 00:01:18.147641 | orchestrator | 00:01:18.147 STDOUT terraform:       + availability_zone = "nova"
2025-04-04 00:01:18.147653 | orchestrator | 00:01:18.147 STDOUT terraform:       + id = (known after apply)
2025-04-04 00:01:18.147683 | orchestrator | 00:01:18.147 STDOUT terraform:       + image_id = (known after apply)
2025-04-04 00:01:18.147708 | orchestrator | 00:01:18.147 STDOUT terraform:       + metadata = (known after apply)
2025-04-04 00:01:18.147741 | orchestrator | 00:01:18.147 STDOUT terraform:       + name = "testbed-volume-4-node-base"
2025-04-04 00:01:18.147766 | orchestrator | 00:01:18.147 STDOUT terraform:       + region = (known after apply)
2025-04-04 00:01:18.147779 | orchestrator | 00:01:18.147 STDOUT terraform:       + size = 80
2025-04-04 00:01:18.147791 | orchestrator | 00:01:18.147 STDOUT terraform:       + volume_type = "ssd"
2025-04-04 00:01:18.147803 | orchestrator | 00:01:18.147 STDOUT terraform:     }
2025-04-04 00:01:18.147840 | orchestrator | 00:01:18.147 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2025-04-04 00:01:18.147878 | orchestrator | 00:01:18.147 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-04-04 00:01:18.147902 | orchestrator | 00:01:18.147 STDOUT terraform:       + attachment = (known after apply)
2025-04-04 00:01:18.147915 | orchestrator | 00:01:18.147 STDOUT terraform:       + availability_zone = "nova"
2025-04-04 00:01:18.147942 | orchestrator | 00:01:18.147 STDOUT terraform:       + id = (known after apply)
2025-04-04 00:01:18.147967 | orchestrator | 00:01:18.147 STDOUT terraform:       + image_id = (known after apply)
2025-04-04 00:01:18.147992 | orchestrator | 00:01:18.147 STDOUT terraform:       + metadata = (known after apply)
2025-04-04 00:01:18.148026 | orchestrator | 00:01:18.147 STDOUT terraform:       + name = "testbed-volume-5-node-base"
2025-04-04 00:01:18.148050 | orchestrator | 00:01:18.148 STDOUT terraform:       + region = (known after apply)
2025-04-04 00:01:18.148063 | orchestrator | 00:01:18.148 STDOUT terraform:       + size = 80
2025-04-04 00:01:18.148076 | orchestrator | 00:01:18.148 STDOUT terraform:       + volume_type = "ssd"
2025-04-04 00:01:18.148089 | orchestrator | 00:01:18.148 STDOUT terraform:     }
2025-04-04 00:01:18.148122 | orchestrator | 00:01:18.148 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_volume[0] will be created
2025-04-04 00:01:18.148157 | orchestrator | 00:01:18.148 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-04-04 00:01:18.148182 | orchestrator | 00:01:18.148 STDOUT terraform:       + attachment = (known after apply)
2025-04-04 00:01:18.148194 | orchestrator | 00:01:18.148 STDOUT terraform:       + availability_zone = "nova"
2025-04-04 00:01:18.148224 | orchestrator | 00:01:18.148 STDOUT terraform:       + id = (known after apply)
2025-04-04 00:01:18.148246 | orchestrator | 00:01:18.148 STDOUT terraform:       + metadata = (known after apply)
2025-04-04 00:01:18.148295 | orchestrator | 00:01:18.148 STDOUT terraform:       + name = "testbed-volume-0-node-0"
2025-04-04 00:01:18.148320 | orchestrator | 00:01:18.148 STDOUT terraform:       + region = (known after apply)
2025-04-04 00:01:18.148334 | orchestrator | 00:01:18.148 STDOUT terraform:       + size = 20
2025-04-04 00:01:18.148352 | orchestrator | 00:01:18.148 STDOUT terraform:       + volume_type = "ssd"
2025-04-04 00:01:18.148391 | orchestrator | 00:01:18.148 STDOUT terraform:     }
2025-04-04 00:01:18.148404 | orchestrator | 00:01:18.148 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_volume[1] will be created
2025-04-04 00:01:18.148428 | orchestrator | 00:01:18.148 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-04-04 00:01:18.148453 | orchestrator | 00:01:18.148 STDOUT terraform:       + attachment = (known after apply)
2025-04-04 00:01:18.148466 | orchestrator | 00:01:18.148 STDOUT terraform:
+ availability_zone = "nova" 2025-04-04 00:01:18.148491 | orchestrator | 00:01:18.148 STDOUT terraform:  + id = (known after apply) 2025-04-04 00:01:18.148516 | orchestrator | 00:01:18.148 STDOUT terraform:  + metadata = (known after apply) 2025-04-04 00:01:18.148547 | orchestrator | 00:01:18.148 STDOUT terraform:  + name = "testbed-volume-1-node-1" 2025-04-04 00:01:18.148572 | orchestrator | 00:01:18.148 STDOUT terraform:  + region = (known after apply) 2025-04-04 00:01:18.148585 | orchestrator | 00:01:18.148 STDOUT terraform:  + size = 20 2025-04-04 00:01:18.148597 | orchestrator | 00:01:18.148 STDOUT terraform:  + volume_type = "ssd" 2025-04-04 00:01:18.148610 | orchestrator | 00:01:18.148 STDOUT terraform:  } 2025-04-04 00:01:18.148644 | orchestrator | 00:01:18.148 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[2] will be created 2025-04-04 00:01:18.148680 | orchestrator | 00:01:18.148 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-04-04 00:01:18.148705 | orchestrator | 00:01:18.148 STDOUT terraform:  + attachment = (known after apply) 2025-04-04 00:01:18.148718 | orchestrator | 00:01:18.148 STDOUT terraform:  + availability_zone = "nova" 2025-04-04 00:01:18.148745 | orchestrator | 00:01:18.148 STDOUT terraform:  + id = (known after apply) 2025-04-04 00:01:18.148773 | orchestrator | 00:01:18.148 STDOUT terraform:  + metadata = (known after apply) 2025-04-04 00:01:18.148801 | orchestrator | 00:01:18.148 STDOUT terraform:  + name = "testbed-volume-2-node-2" 2025-04-04 00:01:18.148826 | orchestrator | 00:01:18.148 STDOUT terraform:  + region = (known after apply) 2025-04-04 00:01:18.148839 | orchestrator | 00:01:18.148 STDOUT terraform:  + size = 20 2025-04-04 00:01:18.148852 | orchestrator | 00:01:18.148 STDOUT terraform:  + volume_type = "ssd" 2025-04-04 00:01:18.148864 | orchestrator | 00:01:18.148 STDOUT terraform:  } 2025-04-04 00:01:18.148897 | orchestrator | 00:01:18.148 STDOUT terraform:  # 
openstack_blockstorage_volume_v3.node_volume[3] will be created 2025-04-04 00:01:18.148931 | orchestrator | 00:01:18.148 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-04-04 00:01:18.148957 | orchestrator | 00:01:18.148 STDOUT terraform:  + attachment = (known after apply) 2025-04-04 00:01:18.148970 | orchestrator | 00:01:18.148 STDOUT terraform:  + availability_zone = "nova" 2025-04-04 00:01:18.149006 | orchestrator | 00:01:18.148 STDOUT terraform:  + id = (known after apply) 2025-04-04 00:01:18.149030 | orchestrator | 00:01:18.148 STDOUT terraform:  + metadata = (known after apply) 2025-04-04 00:01:18.149062 | orchestrator | 00:01:18.149 STDOUT terraform:  + name = "testbed-volume-3-node-3" 2025-04-04 00:01:18.149087 | orchestrator | 00:01:18.149 STDOUT terraform:  + region = (known after apply) 2025-04-04 00:01:18.149100 | orchestrator | 00:01:18.149 STDOUT terraform:  + size = 20 2025-04-04 00:01:18.149112 | orchestrator | 00:01:18.149 STDOUT terraform:  + volume_type = "ssd" 2025-04-04 00:01:18.149125 | orchestrator | 00:01:18.149 STDOUT terraform:  } 2025-04-04 00:01:18.149158 | orchestrator | 00:01:18.149 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[4] will be created 2025-04-04 00:01:18.149193 | orchestrator | 00:01:18.149 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-04-04 00:01:18.149219 | orchestrator | 00:01:18.149 STDOUT terraform:  + attachment = (known after apply) 2025-04-04 00:01:18.149232 | orchestrator | 00:01:18.149 STDOUT terraform:  + availability_zone = "nova" 2025-04-04 00:01:18.149292 | orchestrator | 00:01:18.149 STDOUT terraform:  + id = (known after apply) 2025-04-04 00:01:18.149316 | orchestrator | 00:01:18.149 STDOUT terraform:  + metadata = (known after apply) 2025-04-04 00:01:18.149329 | orchestrator | 00:01:18.149 STDOUT terraform:  + name = "testbed-volume-4-node-4" 2025-04-04 00:01:18.149341 | orchestrator | 00:01:18.149 STDOUT 
terraform:  + region = (known after apply) 2025-04-04 00:01:18.149353 | orchestrator | 00:01:18.149 STDOUT terraform:  + size = 20 2025-04-04 00:01:18.149366 | orchestrator | 00:01:18.149 STDOUT terraform:  + volume_type = "ssd" 2025-04-04 00:01:18.149378 | orchestrator | 00:01:18.149 STDOUT terraform:  } 2025-04-04 00:01:18.149410 | orchestrator | 00:01:18.149 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[5] will be created 2025-04-04 00:01:18.149445 | orchestrator | 00:01:18.149 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-04-04 00:01:18.149470 | orchestrator | 00:01:18.149 STDOUT terraform:  + attachment = (known after apply) 2025-04-04 00:01:18.149483 | orchestrator | 00:01:18.149 STDOUT terraform:  + availability_zone = "nova" 2025-04-04 00:01:18.149509 | orchestrator | 00:01:18.149 STDOUT terraform:  + id = (known after apply) 2025-04-04 00:01:18.149536 | orchestrator | 00:01:18.149 STDOUT terraform:  + metadata = (known after apply) 2025-04-04 00:01:18.149564 | orchestrator | 00:01:18.149 STDOUT terraform:  + name = "testbed-volume-5-node-5" 2025-04-04 00:01:18.149589 | orchestrator | 00:01:18.149 STDOUT terraform:  + region = (known after apply) 2025-04-04 00:01:18.149601 | orchestrator | 00:01:18.149 STDOUT terraform:  + size = 20 2025-04-04 00:01:18.149614 | orchestrator | 00:01:18.149 STDOUT terraform:  + volume_type = "ssd" 2025-04-04 00:01:18.149627 | orchestrator | 00:01:18.149 STDOUT terraform:  } 2025-04-04 00:01:18.149660 | orchestrator | 00:01:18.149 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[6] will be created 2025-04-04 00:01:18.149694 | orchestrator | 00:01:18.149 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-04-04 00:01:18.149719 | orchestrator | 00:01:18.149 STDOUT terraform:  + attachment = (known after apply) 2025-04-04 00:01:18.149735 | orchestrator | 00:01:18.149 STDOUT terraform:  + availability_zone = "nova" 
2025-04-04 00:01:18.149757 | orchestrator | 00:01:18.149 STDOUT terraform:  + id = (known after apply) 2025-04-04 00:01:18.149782 | orchestrator | 00:01:18.149 STDOUT terraform:  + metadata = (known after apply) 2025-04-04 00:01:18.149813 | orchestrator | 00:01:18.149 STDOUT terraform:  + name = "testbed-volume-6-node-0" 2025-04-04 00:01:18.149838 | orchestrator | 00:01:18.149 STDOUT terraform:  + region = (known after apply) 2025-04-04 00:01:18.149849 | orchestrator | 00:01:18.149 STDOUT terraform:  + size = 20 2025-04-04 00:01:18.149870 | orchestrator | 00:01:18.149 STDOUT terraform:  + volume_type = "ssd" 2025-04-04 00:01:18.149910 | orchestrator | 00:01:18.149 STDOUT terraform:  } 2025-04-04 00:01:18.149921 | orchestrator | 00:01:18.149 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[7] will be created 2025-04-04 00:01:18.149946 | orchestrator | 00:01:18.149 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-04-04 00:01:18.149971 | orchestrator | 00:01:18.149 STDOUT terraform:  + attachment = (known after apply) 2025-04-04 00:01:18.149982 | orchestrator | 00:01:18.149 STDOUT terraform:  + availability_zone = "nova" 2025-04-04 00:01:18.150009 | orchestrator | 00:01:18.149 STDOUT terraform:  + id = (known after apply) 2025-04-04 00:01:18.150050 | orchestrator | 00:01:18.150 STDOUT terraform:  + metadata = (known after apply) 2025-04-04 00:01:18.150081 | orchestrator | 00:01:18.150 STDOUT terraform:  + name = "testbed-volume-7-node-1" 2025-04-04 00:01:18.150106 | orchestrator | 00:01:18.150 STDOUT terraform:  + region = (known after apply) 2025-04-04 00:01:18.150117 | orchestrator | 00:01:18.150 STDOUT terraform:  + size = 20 2025-04-04 00:01:18.150137 | orchestrator | 00:01:18.150 STDOUT terraform:  + volume_type = "ssd" 2025-04-04 00:01:18.150148 | orchestrator | 00:01:18.150 STDOUT terraform:  } 2025-04-04 00:01:18.150181 | orchestrator | 00:01:18.150 STDOUT terraform:  # 
openstack_blockstorage_volume_v3.node_volume[8] will be created 2025-04-04 00:01:18.150216 | orchestrator | 00:01:18.150 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-04-04 00:01:18.150241 | orchestrator | 00:01:18.150 STDOUT terraform:  + attachment = (known after apply) 2025-04-04 00:01:18.150268 | orchestrator | 00:01:18.150 STDOUT terraform:  + availability_zone = "nova" 2025-04-04 00:01:18.150280 | orchestrator | 00:01:18.150 STDOUT terraform:  + id = (known after apply) 2025-04-04 00:01:18.150308 | orchestrator | 00:01:18.150 STDOUT terraform:  + metadata = (known after apply) 2025-04-04 00:01:18.150338 | orchestrator | 00:01:18.150 STDOUT terraform:  + name = "testbed-volume-8-node-2" 2025-04-04 00:01:18.150363 | orchestrator | 00:01:18.150 STDOUT terraform:  + region = (known after apply) 2025-04-04 00:01:18.150374 | orchestrator | 00:01:18.150 STDOUT terraform:  + size = 20 2025-04-04 00:01:18.150393 | orchestrator | 00:01:18.150 STDOUT terraform:  + volume_type = "ssd" 2025-04-04 00:01:18.150404 | orchestrator | 00:01:18.150 STDOUT terraform:  } 2025-04-04 00:01:18.150438 | orchestrator | 00:01:18.150 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[9] will be created 2025-04-04 00:01:18.150473 | orchestrator | 00:01:18.150 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-04-04 00:01:18.150497 | orchestrator | 00:01:18.150 STDOUT terraform:  + attachment = (known after apply) 2025-04-04 00:01:18.150508 | orchestrator | 00:01:18.150 STDOUT terraform:  + availability_zone = "nova" 2025-04-04 00:01:18.150537 | orchestrator | 00:01:18.150 STDOUT terraform:  + id = (known after apply) 2025-04-04 00:01:18.150562 | orchestrator | 00:01:18.150 STDOUT terraform:  + metadata = (known after apply) 2025-04-04 00:01:18.150592 | orchestrator | 00:01:18.150 STDOUT terraform:  + name = "testbed-volume-9-node-3" 2025-04-04 00:01:18.150616 | orchestrator | 00:01:18.150 STDOUT 
terraform:  + region = (known after apply) 2025-04-04 00:01:18.150627 | orchestrator | 00:01:18.150 STDOUT terraform:  + size = 20 2025-04-04 00:01:18.150639 | orchestrator | 00:01:18.150 STDOUT terraform:  + volume_type = "ssd" 2025-04-04 00:01:18.150649 | orchestrator | 00:01:18.150 STDOUT terraform:  } 2025-04-04 00:01:18.150688 | orchestrator | 00:01:18.150 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[10] will be created 2025-04-04 00:01:18.150722 | orchestrator | 00:01:18.150 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-04-04 00:01:18.150747 | orchestrator | 00:01:18.150 STDOUT terraform:  + attachment = (known after apply) 2025-04-04 00:01:18.150758 | orchestrator | 00:01:18.150 STDOUT terraform:  + availability_zone = "nova" 2025-04-04 00:01:18.150785 | orchestrator | 00:01:18.150 STDOUT terraform:  + id = (known after apply) 2025-04-04 00:01:18.150812 | orchestrator | 00:01:18.150 STDOUT terraform:  + metadata = (known after apply) 2025-04-04 00:01:18.150840 | orchestrator | 00:01:18.150 STDOUT terraform:  + name = "testbed-volume-10-node-4" 2025-04-04 00:01:18.150865 | orchestrator | 00:01:18.150 STDOUT terraform:  + region = (known after apply) 2025-04-04 00:01:18.150876 | orchestrator | 00:01:18.150 STDOUT terraform:  + size = 20 2025-04-04 00:01:18.150896 | orchestrator | 00:01:18.150 STDOUT terraform:  + volume_type = "ssd" 2025-04-04 00:01:18.150907 | orchestrator | 00:01:18.150 STDOUT terraform:  } 2025-04-04 00:01:18.150940 | orchestrator | 00:01:18.150 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[11] will be created 2025-04-04 00:01:18.150975 | orchestrator | 00:01:18.150 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-04-04 00:01:18.150999 | orchestrator | 00:01:18.150 STDOUT terraform:  + attachment = (known after apply) 2025-04-04 00:01:18.151010 | orchestrator | 00:01:18.150 STDOUT terraform:  + availability_zone = "nova" 
2025-04-04 00:01:18.151038 | orchestrator | 00:01:18.151 STDOUT terraform:  + id = (known after apply) 2025-04-04 00:01:18.151063 | orchestrator | 00:01:18.151 STDOUT terraform:  + metadata = (known after apply) 2025-04-04 00:01:18.151093 | orchestrator | 00:01:18.151 STDOUT terraform:  + name = "testbed-volume-11-node-5" 2025-04-04 00:01:18.151118 | orchestrator | 00:01:18.151 STDOUT terraform:  + region = (known after apply) 2025-04-04 00:01:18.151129 | orchestrator | 00:01:18.151 STDOUT terraform:  + size = 20 2025-04-04 00:01:18.151149 | orchestrator | 00:01:18.151 STDOUT terraform:  + volume_type = "ssd" 2025-04-04 00:01:18.151160 | orchestrator | 00:01:18.151 STDOUT terraform:  } 2025-04-04 00:01:18.151193 | orchestrator | 00:01:18.151 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[12] will be created 2025-04-04 00:01:18.151227 | orchestrator | 00:01:18.151 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-04-04 00:01:18.151260 | orchestrator | 00:01:18.151 STDOUT terraform:  + attachment = (known after apply) 2025-04-04 00:01:18.151271 | orchestrator | 00:01:18.151 STDOUT terraform:  + availability_zone = "nova" 2025-04-04 00:01:18.151296 | orchestrator | 00:01:18.151 STDOUT terraform:  + id = (known after apply) 2025-04-04 00:01:18.151321 | orchestrator | 00:01:18.151 STDOUT terraform:  + metadata = (known after apply) 2025-04-04 00:01:18.151351 | orchestrator | 00:01:18.151 STDOUT terraform:  + name = "testbed-volume-12-node-0" 2025-04-04 00:01:18.151376 | orchestrator | 00:01:18.151 STDOUT terraform:  + region = (known after apply) 2025-04-04 00:01:18.151387 | orchestrator | 00:01:18.151 STDOUT terraform:  + size = 20 2025-04-04 00:01:18.151406 | orchestrator | 00:01:18.151 STDOUT terraform:  + volume_type = "ssd" 2025-04-04 00:01:18.151417 | orchestrator | 00:01:18.151 STDOUT terraform:  } 2025-04-04 00:01:18.151450 | orchestrator | 00:01:18.151 STDOUT terraform:  # 
openstack_blockstorage_volume_v3.node_volume[13] will be created 2025-04-04 00:01:18.151485 | orchestrator | 00:01:18.151 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-04-04 00:01:18.151509 | orchestrator | 00:01:18.151 STDOUT terraform:  + attachment = (known after apply) 2025-04-04 00:01:18.151519 | orchestrator | 00:01:18.151 STDOUT terraform:  + availability_zone = "nova" 2025-04-04 00:01:18.151548 | orchestrator | 00:01:18.151 STDOUT terraform:  + id = (known after apply) 2025-04-04 00:01:18.151572 | orchestrator | 00:01:18.151 STDOUT terraform:  + metadata = (known after apply) 2025-04-04 00:01:18.151604 | orchestrator | 00:01:18.151 STDOUT terraform:  + name = "testbed-volume-13-node-1" 2025-04-04 00:01:18.151629 | orchestrator | 00:01:18.151 STDOUT terraform:  + region = (known after apply) 2025-04-04 00:01:18.151640 | orchestrator | 00:01:18.151 STDOUT terraform:  + size = 20 2025-04-04 00:01:18.151660 | orchestrator | 00:01:18.151 STDOUT terraform:  + volume_type = "ssd" 2025-04-04 00:01:18.151670 | orchestrator | 00:01:18.151 STDOUT terraform:  } 2025-04-04 00:01:18.151703 | orchestrator | 00:01:18.151 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[14] will be created 2025-04-04 00:01:18.151737 | orchestrator | 00:01:18.151 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-04-04 00:01:18.151762 | orchestrator | 00:01:18.151 STDOUT terraform:  + attachment = (known after apply) 2025-04-04 00:01:18.151774 | orchestrator | 00:01:18.151 STDOUT terraform:  + availability_zone = "nova" 2025-04-04 00:01:18.151802 | orchestrator | 00:01:18.151 STDOUT terraform:  + id = (known after apply) 2025-04-04 00:01:18.151826 | orchestrator | 00:01:18.151 STDOUT terraform:  + metadata = (known after apply) 2025-04-04 00:01:18.151856 | orchestrator | 00:01:18.151 STDOUT terraform:  + name = "testbed-volume-14-node-2" 2025-04-04 00:01:18.151882 | orchestrator | 00:01:18.151 STDOUT 
terraform:  + region = (known after apply) 2025-04-04 00:01:18.151893 | orchestrator | 00:01:18.151 STDOUT terraform:  + size = 20 2025-04-04 00:01:18.151911 | orchestrator | 00:01:18.151 STDOUT terraform:  + volume_type = "ssd" 2025-04-04 00:01:18.151922 | orchestrator | 00:01:18.151 STDOUT terraform:  } 2025-04-04 00:01:18.151956 | orchestrator | 00:01:18.151 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[15] will be created 2025-04-04 00:01:18.151990 | orchestrator | 00:01:18.151 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-04-04 00:01:18.152015 | orchestrator | 00:01:18.151 STDOUT terraform:  + attachment = (known after apply) 2025-04-04 00:01:18.152026 | orchestrator | 00:01:18.152 STDOUT terraform:  + availability_zone = "nova" 2025-04-04 00:01:18.152053 | orchestrator | 00:01:18.152 STDOUT terraform:  + id = (known after apply) 2025-04-04 00:01:18.152080 | orchestrator | 00:01:18.152 STDOUT terraform:  + metadata = (known after apply) 2025-04-04 00:01:18.152109 | orchestrator | 00:01:18.152 STDOUT terraform:  + name = "testbed-volume-15-node-3" 2025-04-04 00:01:18.152133 | orchestrator | 00:01:18.152 STDOUT terraform:  + region = (known after apply) 2025-04-04 00:01:18.152144 | orchestrator | 00:01:18.152 STDOUT terraform:  + size = 20 2025-04-04 00:01:18.152164 | orchestrator | 00:01:18.152 STDOUT terraform:  + volume_type = "ssd" 2025-04-04 00:01:18.152176 | orchestrator | 00:01:18.152 STDOUT terraform:  } 2025-04-04 00:01:18.152208 | orchestrator | 00:01:18.152 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[16] will be created 2025-04-04 00:01:18.152243 | orchestrator | 00:01:18.152 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-04-04 00:01:18.152283 | orchestrator | 00:01:18.152 STDOUT terraform:  + attachment = (known after apply) 2025-04-04 00:01:18.152294 | orchestrator | 00:01:18.152 STDOUT terraform:  + availability_zone = "nova" 
2025-04-04 00:01:18.152322 | orchestrator | 00:01:18.152 STDOUT terraform:  + id = (known after apply) 2025-04-04 00:01:18.152345 | orchestrator | 00:01:18.152 STDOUT terraform:  + metadata = (known after apply) 2025-04-04 00:01:18.152376 | orchestrator | 00:01:18.152 STDOUT terraform:  + name = "testbed-volume-16-node-4" 2025-04-04 00:01:18.152402 | orchestrator | 00:01:18.152 STDOUT terraform:  + region = (known after apply) 2025-04-04 00:01:18.152412 | orchestrator | 00:01:18.152 STDOUT terraform:  + size = 20 2025-04-04 00:01:18.152432 | orchestrator | 00:01:18.152 STDOUT terraform:  + volume_type = "ssd" 2025-04-04 00:01:18.152443 | orchestrator | 00:01:18.152 STDOUT terraform:  } 2025-04-04 00:01:18.152496 | orchestrator | 00:01:18.152 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[17] will be created 2025-04-04 00:01:18.152517 | orchestrator | 00:01:18.152 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-04-04 00:01:18.152528 | orchestrator | 00:01:18.152 STDOUT terraform:  + attachment = (known after apply) 2025-04-04 00:01:18.152550 | orchestrator | 00:01:18.152 STDOUT terraform:  + availability_zone = "nova" 2025-04-04 00:01:18.152575 | orchestrator | 00:01:18.152 STDOUT terraform:  + id = (known after apply) 2025-04-04 00:01:18.152600 | orchestrator | 00:01:18.152 STDOUT terraform:  + metadata = (known after apply) 2025-04-04 00:01:18.152631 | orchestrator | 00:01:18.152 STDOUT terraform:  + name = "testbed-volume-17-node-5" 2025-04-04 00:01:18.152656 | orchestrator | 00:01:18.152 STDOUT terraform:  + region = (known after apply) 2025-04-04 00:01:18.152667 | orchestrator | 00:01:18.152 STDOUT terraform:  + size = 20 2025-04-04 00:01:18.152688 | orchestrator | 00:01:18.152 STDOUT terraform:  + volume_type = "ssd" 2025-04-04 00:01:18.152699 | orchestrator | 00:01:18.152 STDOUT terraform:  } 2025-04-04 00:01:18.152731 | orchestrator | 00:01:18.152 STDOUT terraform:  # 
  # openstack_compute_instance_v2.manager_server will be created
  + resource "openstack_compute_instance_v2" "manager_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-4V-16"
      + force_delete        = false
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-manager"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = (known after apply)

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[0] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-0"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[1] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-1"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[2] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
2025-04-04 00:01:18.155870 | orchestrator | 00:01:18.155 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-04-04 00:01:18.155880 | orchestrator | 00:01:18.155 STDOUT terraform:  + force_delete = false 2025-04-04 00:01:18.155918 | orchestrator | 00:01:18.155 STDOUT terraform:  + id = (known after apply) 2025-04-04 00:01:18.155941 | orchestrator | 00:01:18.155 STDOUT terraform:  + image_id = (known after apply) 2025-04-04 00:01:18.155970 | orchestrator | 00:01:18.155 STDOUT terraform:  + image_name = (known after apply) 2025-04-04 00:01:18.155980 | orchestrator | 00:01:18.155 STDOUT terraform:  + key_pair = "testbed" 2025-04-04 00:01:18.156014 | orchestrator | 00:01:18.155 STDOUT terraform:  + name = "testbed-node-2" 2025-04-04 00:01:18.156028 | orchestrator | 00:01:18.156 STDOUT terraform:  + power_state = "active" 2025-04-04 00:01:18.156058 | orchestrator | 00:01:18.156 STDOUT terraform:  + region = (known after apply) 2025-04-04 00:01:18.156085 | orchestrator | 00:01:18.156 STDOUT terraform:  + security_groups = (known after apply) 2025-04-04 00:01:18.156095 | orchestrator | 00:01:18.156 STDOUT terraform:  + stop_before_destroy = false 2025-04-04 00:01:18.156130 | orchestrator | 00:01:18.156 STDOUT terraform:  + updated = (known after apply) 2025-04-04 00:01:18.156169 | orchestrator | 00:01:18.156 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-04-04 00:01:18.156178 | orchestrator | 00:01:18.156 STDOUT terraform:  + block_device { 2025-04-04 00:01:18.156204 | orchestrator | 00:01:18.156 STDOUT terraform:  + boot_index = 0 2025-04-04 00:01:18.156213 | orchestrator | 00:01:18.156 STDOUT terraform:  + delete_on_termination = false 2025-04-04 00:01:18.156243 | orchestrator | 00:01:18.156 STDOUT terraform:  + destination_type = "volume" 2025-04-04 00:01:18.156261 | orchestrator | 00:01:18.156 STDOUT terraform:  + multiattach = false 2025-04-04 00:01:18.156291 | orchestrator | 00:01:18.156 STDOUT terraform:  + source_type = "volume" 
2025-04-04 00:01:18.156322 | orchestrator | 00:01:18.156 STDOUT terraform:  + uuid = (known after apply) 2025-04-04 00:01:18.156330 | orchestrator | 00:01:18.156 STDOUT terraform:  } 2025-04-04 00:01:18.156339 | orchestrator | 00:01:18.156 STDOUT terraform:  + network { 2025-04-04 00:01:18.156348 | orchestrator | 00:01:18.156 STDOUT terraform:  + access_network = false 2025-04-04 00:01:18.156377 | orchestrator | 00:01:18.156 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-04-04 00:01:18.156398 | orchestrator | 00:01:18.156 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-04-04 00:01:18.156424 | orchestrator | 00:01:18.156 STDOUT terraform:  + mac = (known after apply) 2025-04-04 00:01:18.156445 | orchestrator | 00:01:18.156 STDOUT terraform:  + name = (known after apply) 2025-04-04 00:01:18.156471 | orchestrator | 00:01:18.156 STDOUT terraform:  + port = (known after apply) 2025-04-04 00:01:18.156498 | orchestrator | 00:01:18.156 STDOUT terraform:  + uuid = (known after apply) 2025-04-04 00:01:18.156505 | orchestrator | 00:01:18.156 STDOUT terraform:  } 2025-04-04 00:01:18.156514 | orchestrator | 00:01:18.156 STDOUT terraform:  } 2025-04-04 00:01:18.156548 | orchestrator | 00:01:18.156 STDOUT terraform:  # openstack_compute_instance_v2.node_server[3] will be created 2025-04-04 00:01:18.156581 | orchestrator | 00:01:18.156 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-04-04 00:01:18.156609 | orchestrator | 00:01:18.156 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-04-04 00:01:18.156637 | orchestrator | 00:01:18.156 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-04-04 00:01:18.156666 | orchestrator | 00:01:18.156 STDOUT terraform:  + all_metadata = (known after apply) 2025-04-04 00:01:18.156695 | orchestrator | 00:01:18.156 STDOUT terraform:  + all_tags = (known after apply) 2025-04-04 00:01:18.156711 | orchestrator | 00:01:18.156 STDOUT terraform:  + availability_zone = "nova" 
2025-04-04 00:01:18.156720 | orchestrator | 00:01:18.156 STDOUT terraform:  + config_drive = true 2025-04-04 00:01:18.156753 | orchestrator | 00:01:18.156 STDOUT terraform:  + created = (known after apply) 2025-04-04 00:01:18.156781 | orchestrator | 00:01:18.156 STDOUT terraform:  + flavor_id = (known after apply) 2025-04-04 00:01:18.156805 | orchestrator | 00:01:18.156 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-04-04 00:01:18.156814 | orchestrator | 00:01:18.156 STDOUT terraform:  + force_delete = false 2025-04-04 00:01:18.156848 | orchestrator | 00:01:18.156 STDOUT terraform:  + id = (known after apply) 2025-04-04 00:01:18.156878 | orchestrator | 00:01:18.156 STDOUT terraform:  + image_id = (known after apply) 2025-04-04 00:01:18.156904 | orchestrator | 00:01:18.156 STDOUT terraform:  + image_name = (known after apply) 2025-04-04 00:01:18.156929 | orchestrator | 00:01:18.156 STDOUT terraform:  + key_pair = "testbed" 2025-04-04 00:01:18.156939 | orchestrator | 00:01:18.156 STDOUT terraform:  + name = "testbed-node-3" 2025-04-04 00:01:18.156965 | orchestrator | 00:01:18.156 STDOUT terraform:  + power_state = "active" 2025-04-04 00:01:18.156994 | orchestrator | 00:01:18.156 STDOUT terraform:  + region = (known after apply) 2025-04-04 00:01:18.157025 | orchestrator | 00:01:18.156 STDOUT terraform:  + security_groups = (known after apply) 2025-04-04 00:01:18.157035 | orchestrator | 00:01:18.157 STDOUT terraform:  + stop_before_destroy = false 2025-04-04 00:01:18.157071 | orchestrator | 00:01:18.157 STDOUT terraform:  + updated = (known after apply) 2025-04-04 00:01:18.157111 | orchestrator | 00:01:18.157 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-04-04 00:01:18.157120 | orchestrator | 00:01:18.157 STDOUT terraform:  + block_device { 2025-04-04 00:01:18.157147 | orchestrator | 00:01:18.157 STDOUT terraform:  + boot_index = 0 2025-04-04 00:01:18.157156 | orchestrator | 00:01:18.157 STDOUT terraform:  + 
delete_on_termination = false 2025-04-04 00:01:18.157184 | orchestrator | 00:01:18.157 STDOUT terraform:  + destination_type = "volume" 2025-04-04 00:01:18.157205 | orchestrator | 00:01:18.157 STDOUT terraform:  + multiattach = false 2025-04-04 00:01:18.157226 | orchestrator | 00:01:18.157 STDOUT terraform:  + source_type = "volume" 2025-04-04 00:01:18.157265 | orchestrator | 00:01:18.157 STDOUT terraform:  + uuid = (known after apply) 2025-04-04 00:01:18.157275 | orchestrator | 00:01:18.157 STDOUT terraform:  } 2025-04-04 00:01:18.157284 | orchestrator | 00:01:18.157 STDOUT terraform:  + network { 2025-04-04 00:01:18.157293 | orchestrator | 00:01:18.157 STDOUT terraform:  + access_network = false 2025-04-04 00:01:18.157326 | orchestrator | 00:01:18.157 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-04-04 00:01:18.157347 | orchestrator | 00:01:18.157 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-04-04 00:01:18.157374 | orchestrator | 00:01:18.157 STDOUT terraform:  + mac = (known after apply) 2025-04-04 00:01:18.157389 | orchestrator | 00:01:18.157 STDOUT terraform:  + name = (known after apply) 2025-04-04 00:01:18.157421 | orchestrator | 00:01:18.157 STDOUT terraform:  + port = (known after apply) 2025-04-04 00:01:18.157443 | orchestrator | 00:01:18.157 STDOUT terraform:  + uuid = (known after apply) 2025-04-04 00:01:18.157467 | orchestrator | 00:01:18.157 STDOUT terraform:  } 2025-04-04 00:01:18.157476 | orchestrator | 00:01:18.157 STDOUT terraform:  } 2025-04-04 00:01:18.157497 | orchestrator | 00:01:18.157 STDOUT terraform:  # openstack_compute_instance_v2.node_server[4] will be created 2025-04-04 00:01:18.157528 | orchestrator | 00:01:18.157 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-04-04 00:01:18.157557 | orchestrator | 00:01:18.157 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-04-04 00:01:18.157585 | orchestrator | 00:01:18.157 STDOUT terraform:  + access_ip_v6 = (known after 
apply) 2025-04-04 00:01:18.157613 | orchestrator | 00:01:18.157 STDOUT terraform:  + all_metadata = (known after apply) 2025-04-04 00:01:18.157643 | orchestrator | 00:01:18.157 STDOUT terraform:  + all_tags = (known after apply) 2025-04-04 00:01:18.157652 | orchestrator | 00:01:18.157 STDOUT terraform:  + availability_zone = "nova" 2025-04-04 00:01:18.157672 | orchestrator | 00:01:18.157 STDOUT terraform:  + config_drive = true 2025-04-04 00:01:18.157702 | orchestrator | 00:01:18.157 STDOUT terraform:  + created = (known after apply) 2025-04-04 00:01:18.157730 | orchestrator | 00:01:18.157 STDOUT terraform:  + flavor_id = (known after apply) 2025-04-04 00:01:18.157751 | orchestrator | 00:01:18.157 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-04-04 00:01:18.157760 | orchestrator | 00:01:18.157 STDOUT terraform:  + force_delete = false 2025-04-04 00:01:18.157796 | orchestrator | 00:01:18.157 STDOUT terraform:  + id = (known after apply) 2025-04-04 00:01:18.157824 | orchestrator | 00:01:18.157 STDOUT terraform:  + image_id = (known after apply) 2025-04-04 00:01:18.157853 | orchestrator | 00:01:18.157 STDOUT terraform:  + image_name = (known after apply) 2025-04-04 00:01:18.157879 | orchestrator | 00:01:18.157 STDOUT terraform:  + key_pair = "testbed" 2025-04-04 00:01:18.157888 | orchestrator | 00:01:18.157 STDOUT terraform:  + name = "testbed-node-4" 2025-04-04 00:01:18.157916 | orchestrator | 00:01:18.157 STDOUT terraform:  + power_state = "active" 2025-04-04 00:01:18.157944 | orchestrator | 00:01:18.157 STDOUT terraform:  + region = (known after apply) 2025-04-04 00:01:18.157972 | orchestrator | 00:01:18.157 STDOUT terraform:  + security_groups = (known after apply) 2025-04-04 00:01:18.157981 | orchestrator | 00:01:18.157 STDOUT terraform:  + stop_before_destroy = false 2025-04-04 00:01:18.158029 | orchestrator | 00:01:18.157 STDOUT terraform:  + updated = (known after apply) 2025-04-04 00:01:18.158069 | orchestrator | 00:01:18.158 STDOUT terraform:  + 
user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-04-04 00:01:18.158079 | orchestrator | 00:01:18.158 STDOUT terraform:  + block_device { 2025-04-04 00:01:18.158087 | orchestrator | 00:01:18.158 STDOUT terraform:  + boot_index = 0 2025-04-04 00:01:18.158119 | orchestrator | 00:01:18.158 STDOUT terraform:  + delete_on_termination = false 2025-04-04 00:01:18.158140 | orchestrator | 00:01:18.158 STDOUT terraform:  + destination_type = "volume" 2025-04-04 00:01:18.158161 | orchestrator | 00:01:18.158 STDOUT terraform:  + multiattach = false 2025-04-04 00:01:18.158182 | orchestrator | 00:01:18.158 STDOUT terraform:  + source_type = "volume" 2025-04-04 00:01:18.158213 | orchestrator | 00:01:18.158 STDOUT terraform:  + uuid = (known after apply) 2025-04-04 00:01:18.158238 | orchestrator | 00:01:18.158 STDOUT terraform:  } 2025-04-04 00:01:18.158256 | orchestrator | 00:01:18.158 STDOUT terraform:  + network { 2025-04-04 00:01:18.158276 | orchestrator | 00:01:18.158 STDOUT terraform:  + access_network = false 2025-04-04 00:01:18.158288 | orchestrator | 00:01:18.158 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-04-04 00:01:18.158297 | orchestrator | 00:01:18.158 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-04-04 00:01:18.158324 | orchestrator | 00:01:18.158 STDOUT terraform:  + mac = (known after apply) 2025-04-04 00:01:18.158345 | orchestrator | 00:01:18.158 STDOUT terraform:  + name = (known after apply) 2025-04-04 00:01:18.158371 | orchestrator | 00:01:18.158 STDOUT terraform:  + port = (known after apply) 2025-04-04 00:01:18.158397 | orchestrator | 00:01:18.158 STDOUT terraform:  + uuid = (known after apply) 2025-04-04 00:01:18.158404 | orchestrator | 00:01:18.158 STDOUT terraform:  } 2025-04-04 00:01:18.158413 | orchestrator | 00:01:18.158 STDOUT terraform:  } 2025-04-04 00:01:18.158449 | orchestrator | 00:01:18.158 STDOUT terraform:  # openstack_compute_instance_v2.node_server[5] will be created 2025-04-04 00:01:18.158482 | 
orchestrator | 00:01:18.158 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-04-04 00:01:18.158510 | orchestrator | 00:01:18.158 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-04-04 00:01:18.158538 | orchestrator | 00:01:18.158 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-04-04 00:01:18.158566 | orchestrator | 00:01:18.158 STDOUT terraform:  + all_metadata = (known after apply) 2025-04-04 00:01:18.158594 | orchestrator | 00:01:18.158 STDOUT terraform:  + all_tags = (known after apply) 2025-04-04 00:01:18.158603 | orchestrator | 00:01:18.158 STDOUT terraform:  + availability_zone = "nova" 2025-04-04 00:01:18.158624 | orchestrator | 00:01:18.158 STDOUT terraform:  + config_drive = true 2025-04-04 00:01:18.158653 | orchestrator | 00:01:18.158 STDOUT terraform:  + created = (known after apply) 2025-04-04 00:01:18.158681 | orchestrator | 00:01:18.158 STDOUT terraform:  + flavor_id = (known after apply) 2025-04-04 00:01:18.158705 | orchestrator | 00:01:18.158 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-04-04 00:01:18.158714 | orchestrator | 00:01:18.158 STDOUT terraform:  + force_delete = false 2025-04-04 00:01:18.158747 | orchestrator | 00:01:18.158 STDOUT terraform:  + id = (known after apply) 2025-04-04 00:01:18.158775 | orchestrator | 00:01:18.158 STDOUT terraform:  + image_id = (known after apply) 2025-04-04 00:01:18.158801 | orchestrator | 00:01:18.158 STDOUT terraform:  + image_name = (known after apply) 2025-04-04 00:01:18.158827 | orchestrator | 00:01:18.158 STDOUT terraform:  + key_pair = "testbed" 2025-04-04 00:01:18.158837 | orchestrator | 00:01:18.158 STDOUT terraform:  + name = "testbed-node-5" 2025-04-04 00:01:18.158864 | orchestrator | 00:01:18.158 STDOUT terraform:  + power_state = "active" 2025-04-04 00:01:18.158891 | orchestrator | 00:01:18.158 STDOUT terraform:  + region = (known after apply) 2025-04-04 00:01:18.158919 | orchestrator | 00:01:18.158 STDOUT terraform:  + 
security_groups = (known after apply) 2025-04-04 00:01:18.158929 | orchestrator | 00:01:18.158 STDOUT terraform:  + stop_before_destroy = false 2025-04-04 00:01:18.158963 | orchestrator | 00:01:18.158 STDOUT terraform:  + updated = (known after apply) 2025-04-04 00:01:18.159002 | orchestrator | 00:01:18.158 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-04-04 00:01:18.159011 | orchestrator | 00:01:18.158 STDOUT terraform:  + block_device { 2025-04-04 00:01:18.159037 | orchestrator | 00:01:18.159 STDOUT terraform:  + boot_index = 0 2025-04-04 00:01:18.159047 | orchestrator | 00:01:18.159 STDOUT terraform:  + delete_on_termination = false 2025-04-04 00:01:18.159076 | orchestrator | 00:01:18.159 STDOUT terraform:  + destination_type = "volume" 2025-04-04 00:01:18.159097 | orchestrator | 00:01:18.159 STDOUT terraform:  + multiattach = false 2025-04-04 00:01:18.159117 | orchestrator | 00:01:18.159 STDOUT terraform:  + source_type = "volume" 2025-04-04 00:01:18.159148 | orchestrator | 00:01:18.159 STDOUT terraform:  + uuid = (known after apply) 2025-04-04 00:01:18.159156 | orchestrator | 00:01:18.159 STDOUT terraform:  } 2025-04-04 00:01:18.159165 | orchestrator | 00:01:18.159 STDOUT terraform:  + network { 2025-04-04 00:01:18.159173 | orchestrator | 00:01:18.159 STDOUT terraform:  + access_network = false 2025-04-04 00:01:18.159205 | orchestrator | 00:01:18.159 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-04-04 00:01:18.159230 | orchestrator | 00:01:18.159 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-04-04 00:01:18.159281 | orchestrator | 00:01:18.159 STDOUT terraform:  + mac = (known after apply) 2025-04-04 00:01:18.159309 | orchestrator | 00:01:18.159 STDOUT terraform:  + name = (known after apply) 2025-04-04 00:01:18.159318 | orchestrator | 00:01:18.159 STDOUT terraform:  + port = (known after apply) 2025-04-04 00:01:18.159327 | orchestrator | 00:01:18.159 STDOUT terraform:  + uuid = (known after apply) 
2025-04-04 00:01:18.159335 | orchestrator | 00:01:18.159 STDOUT terraform:  } 2025-04-04 00:01:18.159344 | orchestrator | 00:01:18.159 STDOUT terraform:  } 2025-04-04 00:01:18.159378 | orchestrator | 00:01:18.159 STDOUT terraform:  # openstack_compute_keypair_v2.key will be created 2025-04-04 00:01:18.159400 | orchestrator | 00:01:18.159 STDOUT terraform:  + resource "openstack_compute_keypair_v2" "key" { 2025-04-04 00:01:18.159421 | orchestrator | 00:01:18.159 STDOUT terraform:  + fingerprint = (known after apply) 2025-04-04 00:01:18.159442 | orchestrator | 00:01:18.159 STDOUT terraform:  + id = (known after apply) 2025-04-04 00:01:18.159455 | orchestrator | 00:01:18.159 STDOUT terraform:  + name = "testbed" 2025-04-04 00:01:18.159464 | orchestrator | 00:01:18.159 STDOUT terraform:  + private_key = (sensitive value) 2025-04-04 00:01:18.159494 | orchestrator | 00:01:18.159 STDOUT terraform:  + public_key = (known after apply) 2025-04-04 00:01:18.159515 | orchestrator | 00:01:18.159 STDOUT terraform:  + region = (known after apply) 2025-04-04 00:01:18.159536 | orchestrator | 00:01:18.159 STDOUT terraform:  + user_id = (known after apply) 2025-04-04 00:01:18.159582 | orchestrator | 00:01:18.159 STDOUT terraform:  } 2025-04-04 00:01:18.159592 | orchestrator | 00:01:18.159 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created 2025-04-04 00:01:18.159623 | orchestrator | 00:01:18.159 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-04-04 00:01:18.159643 | orchestrator | 00:01:18.159 STDOUT terraform:  + device = (known after apply) 2025-04-04 00:01:18.159664 | orchestrator | 00:01:18.159 STDOUT terraform:  + id = (known after apply) 2025-04-04 00:01:18.159682 | orchestrator | 00:01:18.159 STDOUT terraform:  + instance_id = (known after apply) 2025-04-04 00:01:18.159705 | orchestrator | 00:01:18.159 STDOUT terraform:  + region = (known after apply) 2025-04-04 00:01:18.159728 | 
orchestrator | 00:01:18.159 STDOUT terraform:  + volume_id = (known after apply) 2025-04-04 00:01:18.159774 | orchestrator | 00:01:18.159 STDOUT terraform:  } 2025-04-04 00:01:18.159782 | orchestrator | 00:01:18.159 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created 2025-04-04 00:01:18.159815 | orchestrator | 00:01:18.159 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-04-04 00:01:18.159839 | orchestrator | 00:01:18.159 STDOUT terraform:  + device = (known after apply) 2025-04-04 00:01:18.159865 | orchestrator | 00:01:18.159 STDOUT terraform:  + id = (known after apply) 2025-04-04 00:01:18.159883 | orchestrator | 00:01:18.159 STDOUT terraform:  + instance_id = (known after apply) 2025-04-04 00:01:18.159908 | orchestrator | 00:01:18.159 STDOUT terraform:  + region = (known after apply) 2025-04-04 00:01:18.159929 | orchestrator | 00:01:18.159 STDOUT terraform:  + volume_id = (known after apply) 2025-04-04 00:01:18.159938 | orchestrator | 00:01:18.159 STDOUT terraform:  } 2025-04-04 00:01:18.159977 | orchestrator | 00:01:18.159 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created 2025-04-04 00:01:18.160016 | orchestrator | 00:01:18.159 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-04-04 00:01:18.160039 | orchestrator | 00:01:18.160 STDOUT terraform:  + device = (known after apply) 2025-04-04 00:01:18.160063 | orchestrator | 00:01:18.160 STDOUT terraform:  + id = (known after apply) 2025-04-04 00:01:18.160086 | orchestrator | 00:01:18.160 STDOUT terraform:  + instance_id = (known after apply) 2025-04-04 00:01:18.160104 | orchestrator | 00:01:18.160 STDOUT terraform:  + region = (known after apply) 2025-04-04 00:01:18.160129 | orchestrator | 00:01:18.160 STDOUT terraform:  + volume_id = (known after apply) 2025-04-04 00:01:18.160175 | orchestrator | 00:01:18.160 STDOUT 
terraform:  } 2025-04-04 00:01:18.160184 | orchestrator | 00:01:18.160 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created 2025-04-04 00:01:18.160216 | orchestrator | 00:01:18.160 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-04-04 00:01:18.160240 | orchestrator | 00:01:18.160 STDOUT terraform:  + device = (known after apply) 2025-04-04 00:01:18.160265 | orchestrator | 00:01:18.160 STDOUT terraform:  + id = (known after apply) 2025-04-04 00:01:18.160284 | orchestrator | 00:01:18.160 STDOUT terraform:  + instance_id = (known after apply) 2025-04-04 00:01:18.160308 | orchestrator | 00:01:18.160 STDOUT terraform:  + region = (known after apply) 2025-04-04 00:01:18.160327 | orchestrator | 00:01:18.160 STDOUT terraform:  + volume_id = (known after apply) 2025-04-04 00:01:18.160335 | orchestrator | 00:01:18.160 STDOUT terraform:  } 2025-04-04 00:01:18.160376 | orchestrator | 00:01:18.160 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created 2025-04-04 00:01:18.160415 | orchestrator | 00:01:18.160 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-04-04 00:01:18.160434 | orchestrator | 00:01:18.160 STDOUT terraform:  + device = (known after apply) 2025-04-04 00:01:18.160458 | orchestrator | 00:01:18.160 STDOUT terraform:  + id = (known after apply) 2025-04-04 00:01:18.160477 | orchestrator | 00:01:18.160 STDOUT terraform:  + instance_id = (known after apply) 2025-04-04 00:01:18.160501 | orchestrator | 00:01:18.160 STDOUT terraform:  + region = (known after apply) 2025-04-04 00:01:18.160525 | orchestrator | 00:01:18.160 STDOUT terraform:  + volume_id = (known after apply) 2025-04-04 00:01:18.160572 | orchestrator | 00:01:18.160 STDOUT terraform:  } 2025-04-04 00:01:18.160580 | orchestrator | 00:01:18.160 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[5] 
will be created 2025-04-04 00:01:18.160612 | orchestrator | 00:01:18.160 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-04-04 00:01:18.160631 | orchestrator | 00:01:18.160 STDOUT terraform:  + device = (known after apply) 2025-04-04 00:01:18.160656 | orchestrator | 00:01:18.160 STDOUT terraform:  + id = (known after apply) 2025-04-04 00:01:18.160674 | orchestrator | 00:01:18.160 STDOUT terraform:  + instance_id = (known after apply) 2025-04-04 00:01:18.160698 | orchestrator | 00:01:18.160 STDOUT terraform:  + region = (known after apply) 2025-04-04 00:01:18.160721 | orchestrator | 00:01:18.160 STDOUT terraform:  + volume_id = (known after apply) 2025-04-04 00:01:18.160766 | orchestrator | 00:01:18.160 STDOUT terraform:  } 2025-04-04 00:01:18.160775 | orchestrator | 00:01:18.160 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created 2025-04-04 00:01:18.160807 | orchestrator | 00:01:18.160 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-04-04 00:01:18.160825 | orchestrator | 00:01:18.160 STDOUT terraform:  + device = (known after apply) 2025-04-04 00:01:18.160850 | orchestrator | 00:01:18.160 STDOUT terraform:  + id = (known after apply) 2025-04-04 00:01:18.160874 | orchestrator | 00:01:18.160 STDOUT terraform:  + instance_id = (known after apply) 2025-04-04 00:01:18.160896 | orchestrator | 00:01:18.160 STDOUT terraform:  + region = (known after apply) 2025-04-04 00:01:18.160915 | orchestrator | 00:01:18.160 STDOUT terraform:  + volume_id = (known after apply) 2025-04-04 00:01:18.160962 | orchestrator | 00:01:18.160 STDOUT terraform:  } 2025-04-04 00:01:18.160971 | orchestrator | 00:01:18.160 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created 2025-04-04 00:01:18.161002 | orchestrator | 00:01:18.160 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" 
"node_volume_attachment" { 2025-04-04 00:01:18.161021 | orchestrator | 00:01:18.160 STDOUT terraform:  + device = (known after apply) 2025-04-04 00:01:18.161046 | orchestrator | 00:01:18.161 STDOUT terraform:  + id = (known after apply) 2025-04-04 00:01:18.161064 | orchestrator | 00:01:18.161 STDOUT terraform:  + instance_id = (known after apply) 2025-04-04 00:01:18.161091 | orchestrator | 00:01:18.161 STDOUT terraform:  + region = (known after apply) 2025-04-04 00:01:18.161110 | orchestrator | 00:01:18.161 STDOUT terraform:  + volume_id = (known after apply) 2025-04-04 00:01:18.161155 | orchestrator | 00:01:18.161 STDOUT terraform:  } 2025-04-04 00:01:18.161163 | orchestrator | 00:01:18.161 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created 2025-04-04 00:01:18.161195 | orchestrator | 00:01:18.161 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-04-04 00:01:18.161219 | orchestrator | 00:01:18.161 STDOUT terraform:  + device = (known after apply) 2025-04-04 00:01:18.161242 | orchestrator | 00:01:18.161 STDOUT terraform:  + id = (known after apply) 2025-04-04 00:01:18.161270 | orchestrator | 00:01:18.161 STDOUT terraform:  + instance_id = (known after apply) 2025-04-04 00:01:18.161298 | orchestrator | 00:01:18.161 STDOUT terraform:  + region = (known after apply) 2025-04-04 00:01:18.161317 | orchestrator | 00:01:18.161 STDOUT terraform:  + volume_id = (known after apply) 2025-04-04 00:01:18.161364 | orchestrator | 00:01:18.161 STDOUT terraform:  } 2025-04-04 00:01:18.161373 | orchestrator | 00:01:18.161 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[9] will be created 2025-04-04 00:01:18.161404 | orchestrator | 00:01:18.161 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-04-04 00:01:18.161428 | orchestrator | 00:01:18.161 STDOUT terraform:  + device = (known after apply) 2025-04-04 
00:01:18.161447 | orchestrator | 00:01:18.161 STDOUT terraform:
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[10] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[11] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[12] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[13] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[14] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[15] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[16] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[17] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      + fixed_ip    = (known after apply)
      + floating_ip = (known after apply)
      + id          = (known after apply)
      + port_id     = (known after apply)
      + region      = (known after apply)
    }

  # openstack_networking_floatingip_v2.manager_floating_ip will be created
  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      + address    = (known after apply)
      + all_tags   = (known after apply)
      + dns_domain = (known after apply)
      + dns_name   = (known after apply)
      + fixed_ip   = (known after apply)
      + id         = (known after apply)
      + pool       = "public"
      + port_id    = (known after apply)
      + region     = (known after apply)
      + subnet_id  = (known after apply)
      + tenant_id  = (known after apply)
    }

  # openstack_networking_network_v2.net_management will be created
  + resource "openstack_networking_network_v2" "net_management" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + dns_domain              = (known after apply)
      + external                = (known after apply)
      + id                      = (known after apply)
      + mtu                     = (known after apply)
      + name                    = "net-testbed-management"
      + port_security_enabled   = (known after apply)
      + qos_policy_id           = (known after apply)
      + region                  = (known after apply)
      + shared                  = (known after apply)
      + tenant_id               = (known after apply)
      + transparent_vlan        = (known after apply)

      + segments (known after apply)
    }

  # openstack_networking_port_v2.manager_port_management will be created
  + resource "openstack_networking_port_v2" "manager_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.5"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[0] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.10"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[1] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.11"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[2] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.12"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[3] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.13"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[4] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.14"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[5] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.15"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_router_interface_v2.router_interface will be created
  + resource "openstack_networking_router_interface_v2" "router_interface" {
      + force_destroy = false
      + id            = (known after apply)
      + port_id       = (known after apply)
      + region        = (known after apply)
      + router_id     = (known after apply)
      + subnet_id     = (known after apply)
    }
2025-04-04 00:01:18.172433 | orchestrator | 00:01:18.172
STDOUT terraform:  # openstack_networking_router_v2.router will be created 2025-04-04 00:01:18.172458 | orchestrator | 00:01:18.172 STDOUT terraform:  + resource "openstack_networking_router_v2" "router" { 2025-04-04 00:01:18.172500 | orchestrator | 00:01:18.172 STDOUT terraform:  + admin_state_up = (known after apply) 2025-04-04 00:01:18.172535 | orchestrator | 00:01:18.172 STDOUT terraform:  + all_tags = (known after apply) 2025-04-04 00:01:18.172553 | orchestrator | 00:01:18.172 STDOUT terraform:  + availability_zone_hints = [ 2025-04-04 00:01:18.172570 | orchestrator | 00:01:18.172 STDOUT terraform:  + "nova", 2025-04-04 00:01:18.172609 | orchestrator | 00:01:18.172 STDOUT terraform:  ] 2025-04-04 00:01:18.172627 | orchestrator | 00:01:18.172 STDOUT terraform:  + distributed = (known after apply) 2025-04-04 00:01:18.172662 | orchestrator | 00:01:18.172 STDOUT terraform:  + enable_snat = (known after apply) 2025-04-04 00:01:18.172714 | orchestrator | 00:01:18.172 STDOUT terraform:  + external_network_id = "e6be7364-bfd8-4de7-8120-8f41c69a139a" 2025-04-04 00:01:18.172750 | orchestrator | 00:01:18.172 STDOUT terraform:  + id = (known after apply) 2025-04-04 00:01:18.172768 | orchestrator | 00:01:18.172 STDOUT terraform:  + name = "testbed" 2025-04-04 00:01:18.172812 | orchestrator | 00:01:18.172 STDOUT terraform:  + region = (known after apply) 2025-04-04 00:01:18.172849 | orchestrator | 00:01:18.172 STDOUT terraform:  + tenant_id = (known after apply) 2025-04-04 00:01:18.172866 | orchestrator | 00:01:18.172 STDOUT terraform:  + external_fixed_ip (known after apply) 2025-04-04 00:01:18.172883 | orchestrator | 00:01:18.172 STDOUT terraform:  } 2025-04-04 00:01:18.172938 | orchestrator | 00:01:18.172 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created 2025-04-04 00:01:18.172990 | orchestrator | 00:01:18.172 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" { 
2025-04-04 00:01:18.173009 | orchestrator | 00:01:18.172 STDOUT terraform:  + description = "ssh" 2025-04-04 00:01:18.173026 | orchestrator | 00:01:18.172 STDOUT terraform:  + direction = "ingress" 2025-04-04 00:01:18.173043 | orchestrator | 00:01:18.173 STDOUT terraform:  + ethertype = "IPv4" 2025-04-04 00:01:18.173070 | orchestrator | 00:01:18.173 STDOUT terraform:  + id = (known after apply) 2025-04-04 00:01:18.173088 | orchestrator | 00:01:18.173 STDOUT terraform:  + port_range_max = 22 2025-04-04 00:01:18.173105 | orchestrator | 00:01:18.173 STDOUT terraform:  + port_range_min = 22 2025-04-04 00:01:18.173122 | orchestrator | 00:01:18.173 STDOUT terraform:  + protocol = "tcp" 2025-04-04 00:01:18.173151 | orchestrator | 00:01:18.173 STDOUT terraform:  + region = (known after apply) 2025-04-04 00:01:18.173180 | orchestrator | 00:01:18.173 STDOUT terraform:  + remote_group_id = (known after apply) 2025-04-04 00:01:18.173197 | orchestrator | 00:01:18.173 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-04-04 00:01:18.173232 | orchestrator | 00:01:18.173 STDOUT terraform:  + security_group_id = (known after apply) 2025-04-04 00:01:18.173313 | orchestrator | 00:01:18.173 STDOUT terraform:  + tenant_id = (known after apply) 2025-04-04 00:01:18.173336 | orchestrator | 00:01:18.173 STDOUT terraform:  } 2025-04-04 00:01:18.173382 | orchestrator | 00:01:18.173 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created 2025-04-04 00:01:18.173434 | orchestrator | 00:01:18.173 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" { 2025-04-04 00:01:18.173452 | orchestrator | 00:01:18.173 STDOUT terraform:  + description = "wireguard" 2025-04-04 00:01:18.173469 | orchestrator | 00:01:18.173 STDOUT terraform:  + direction = "ingress" 2025-04-04 00:01:18.173487 | orchestrator | 00:01:18.173 STDOUT terraform:  + ethertype = "IPv4" 2025-04-04 00:01:18.173522 | orchestrator | 
00:01:18.173 STDOUT terraform:  + id = (known after apply) 2025-04-04 00:01:18.173540 | orchestrator | 00:01:18.173 STDOUT terraform:  + port_range_max = 51820 2025-04-04 00:01:18.173557 | orchestrator | 00:01:18.173 STDOUT terraform:  + port_range_min = 51820 2025-04-04 00:01:18.173574 | orchestrator | 00:01:18.173 STDOUT terraform:  + protocol = "udp" 2025-04-04 00:01:18.173592 | orchestrator | 00:01:18.173 STDOUT terraform:  + region = (known after apply) 2025-04-04 00:01:18.173626 | orchestrator | 00:01:18.173 STDOUT terraform:  + remote_group_id = (known after apply) 2025-04-04 00:01:18.173644 | orchestrator | 00:01:18.173 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-04-04 00:01:18.173679 | orchestrator | 00:01:18.173 STDOUT terraform:  + security_group_id = (known after apply) 2025-04-04 00:01:18.173695 | orchestrator | 00:01:18.173 STDOUT terraform:  + tenant_id = (known after apply) 2025-04-04 00:01:18.173710 | orchestrator | 00:01:18.173 STDOUT terraform:  } 2025-04-04 00:01:18.173764 | orchestrator | 00:01:18.173 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created 2025-04-04 00:01:18.173817 | orchestrator | 00:01:18.173 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" { 2025-04-04 00:01:18.173833 | orchestrator | 00:01:18.173 STDOUT terraform:  + direction = "ingress" 2025-04-04 00:01:18.173848 | orchestrator | 00:01:18.173 STDOUT terraform:  + ethertype = "IPv4" 2025-04-04 00:01:18.173884 | orchestrator | 00:01:18.173 STDOUT terraform:  + id = (known after apply) 2025-04-04 00:01:18.173900 | orchestrator | 00:01:18.173 STDOUT terraform:  + protocol = "tcp" 2025-04-04 00:01:18.173932 | orchestrator | 00:01:18.173 STDOUT terraform:  + region = (known after apply) 2025-04-04 00:01:18.173948 | orchestrator | 00:01:18.173 STDOUT terraform:  + remote_group_id = (known after apply) 2025-04-04 00:01:18.173987 | orchestrator | 00:01:18.173 
STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-04-04 00:01:18.174050 | orchestrator | 00:01:18.173 STDOUT terraform:  + security_group_id = (known after apply) 2025-04-04 00:01:18.174069 | orchestrator | 00:01:18.174 STDOUT terraform:  + tenant_id = (known after apply) 2025-04-04 00:01:18.174084 | orchestrator | 00:01:18.174 STDOUT terraform:  } 2025-04-04 00:01:18.174146 | orchestrator | 00:01:18.174 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created 2025-04-04 00:01:18.174200 | orchestrator | 00:01:18.174 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" { 2025-04-04 00:01:18.174223 | orchestrator | 00:01:18.174 STDOUT terraform:  + direction = "ingress" 2025-04-04 00:01:18.174238 | orchestrator | 00:01:18.174 STDOUT terraform:  + ethertype = "IPv4" 2025-04-04 00:01:18.174294 | orchestrator | 00:01:18.174 STDOUT terraform:  + id = (known after apply) 2025-04-04 00:01:18.174312 | orchestrator | 00:01:18.174 STDOUT terraform:  + protocol = "udp" 2025-04-04 00:01:18.174328 | orchestrator | 00:01:18.174 STDOUT terraform:  + region = (known after apply) 2025-04-04 00:01:18.174361 | orchestrator | 00:01:18.174 STDOUT terraform:  + remote_group_id = (known after apply) 2025-04-04 00:01:18.174386 | orchestrator | 00:01:18.174 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-04-04 00:01:18.174419 | orchestrator | 00:01:18.174 STDOUT terraform:  + security_group_id = (known after apply) 2025-04-04 00:01:18.174435 | orchestrator | 00:01:18.174 STDOUT terraform:  + tenant_id = (known after apply) 2025-04-04 00:01:18.174450 | orchestrator | 00:01:18.174 STDOUT terraform:  } 2025-04-04 00:01:18.174503 | orchestrator | 00:01:18.174 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created 2025-04-04 00:01:18.174556 | orchestrator | 00:01:18.174 STDOUT terraform:  + resource 
"openstack_networking_secgroup_rule_v2" "security_group_management_rule5" { 2025-04-04 00:01:18.174578 | orchestrator | 00:01:18.174 STDOUT terraform:  + direction = "ingress" 2025-04-04 00:01:18.174593 | orchestrator | 00:01:18.174 STDOUT terraform:  + ethertype = "IPv4" 2025-04-04 00:01:18.174627 | orchestrator | 00:01:18.174 STDOUT terraform:  + id = (known after apply) 2025-04-04 00:01:18.174643 | orchestrator | 00:01:18.174 STDOUT terraform:  + protocol = "icmp" 2025-04-04 00:01:18.174666 | orchestrator | 00:01:18.174 STDOUT terraform:  + region = (known after apply) 2025-04-04 00:01:18.174700 | orchestrator | 00:01:18.174 STDOUT terraform:  + remote_group_id = (known after apply) 2025-04-04 00:01:18.174717 | orchestrator | 00:01:18.174 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-04-04 00:01:18.174751 | orchestrator | 00:01:18.174 STDOUT terraform:  + security_group_id = (known after apply) 2025-04-04 00:01:18.174778 | orchestrator | 00:01:18.174 STDOUT terraform:  + tenant_id = (known after apply) 2025-04-04 00:01:18.174793 | orchestrator | 00:01:18.174 STDOUT terraform:  } 2025-04-04 00:01:18.174840 | orchestrator | 00:01:18.174 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created 2025-04-04 00:01:18.174891 | orchestrator | 00:01:18.174 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" { 2025-04-04 00:01:18.174908 | orchestrator | 00:01:18.174 STDOUT terraform:  + direction = "ingress" 2025-04-04 00:01:18.174923 | orchestrator | 00:01:18.174 STDOUT terraform:  + ethertype = "IPv4" 2025-04-04 00:01:18.174959 | orchestrator | 00:01:18.174 STDOUT terraform:  + id = (known after apply) 2025-04-04 00:01:18.174975 | orchestrator | 00:01:18.174 STDOUT terraform:  + protocol = "tcp" 2025-04-04 00:01:18.175008 | orchestrator | 00:01:18.174 STDOUT terraform:  + region = (known after apply) 2025-04-04 00:01:18.175033 | orchestrator | 00:01:18.174 STDOUT 
terraform:  + remote_group_id = (known after apply) 2025-04-04 00:01:18.175048 | orchestrator | 00:01:18.175 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-04-04 00:01:18.175080 | orchestrator | 00:01:18.175 STDOUT terraform:  + security_group_id = (known after apply) 2025-04-04 00:01:18.175113 | orchestrator | 00:01:18.175 STDOUT terraform:  + tenant_id = (known after apply) 2025-04-04 00:01:18.175166 | orchestrator | 00:01:18.175 STDOUT terraform:  } 2025-04-04 00:01:18.175182 | orchestrator | 00:01:18.175 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created 2025-04-04 00:01:18.175218 | orchestrator | 00:01:18.175 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" { 2025-04-04 00:01:18.175264 | orchestrator | 00:01:18.175 STDOUT terraform:  + direction = "ingress" 2025-04-04 00:01:18.175281 | orchestrator | 00:01:18.175 STDOUT terraform:  + ethertype = "IPv4" 2025-04-04 00:01:18.175296 | orchestrator | 00:01:18.175 STDOUT terraform:  + id = (known after apply) 2025-04-04 00:01:18.175321 | orchestrator | 00:01:18.175 STDOUT terraform:  + protocol = "udp" 2025-04-04 00:01:18.175354 | orchestrator | 00:01:18.175 STDOUT terraform:  + region = (known after apply) 2025-04-04 00:01:18.175372 | orchestrator | 00:01:18.175 STDOUT terraform:  + remote_group_id = (known after apply) 2025-04-04 00:01:18.175405 | orchestrator | 00:01:18.175 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-04-04 00:01:18.175430 | orchestrator | 00:01:18.175 STDOUT terraform:  + security_group_id = (known after apply) 2025-04-04 00:01:18.175464 | orchestrator | 00:01:18.175 STDOUT terraform:  + tenant_id = (known after apply) 2025-04-04 00:01:18.175519 | orchestrator | 00:01:18.175 STDOUT terraform:  } 2025-04-04 00:01:18.175536 | orchestrator | 00:01:18.175 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created 2025-04-04 00:01:18.175572 
| orchestrator | 00:01:18.175 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" { 2025-04-04 00:01:18.175588 | orchestrator | 00:01:18.175 STDOUT terraform:  + direction = "ingress" 2025-04-04 00:01:18.175604 | orchestrator | 00:01:18.175 STDOUT terraform:  + ethertype = "IPv4" 2025-04-04 00:01:18.175645 | orchestrator | 00:01:18.175 STDOUT terraform:  + id = (known after apply) 2025-04-04 00:01:18.175662 | orchestrator | 00:01:18.175 STDOUT terraform:  + protocol = "icmp" 2025-04-04 00:01:18.175685 | orchestrator | 00:01:18.175 STDOUT terraform:  + region = (known after apply) 2025-04-04 00:01:18.175718 | orchestrator | 00:01:18.175 STDOUT terraform:  + remote_group_id = (known after apply) 2025-04-04 00:01:18.175734 | orchestrator | 00:01:18.175 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-04-04 00:01:18.175768 | orchestrator | 00:01:18.175 STDOUT terraform:  + security_group_id = (known after apply) 2025-04-04 00:01:18.175793 | orchestrator | 00:01:18.175 STDOUT terraform:  + tenant_id = (known after apply) 2025-04-04 00:01:18.175809 | orchestrator | 00:01:18.175 STDOUT terraform:  } 2025-04-04 00:01:18.175856 | orchestrator | 00:01:18.175 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created 2025-04-04 00:01:18.175906 | orchestrator | 00:01:18.175 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" { 2025-04-04 00:01:18.175922 | orchestrator | 00:01:18.175 STDOUT terraform:  + description = "vrrp" 2025-04-04 00:01:18.175937 | orchestrator | 00:01:18.175 STDOUT terraform:  + direction = "ingress" 2025-04-04 00:01:18.175952 | orchestrator | 00:01:18.175 STDOUT terraform:  + ethertype = "IPv4" 2025-04-04 00:01:18.175990 | orchestrator | 00:01:18.175 STDOUT terraform:  + id = (known after apply) 2025-04-04 00:01:18.176006 | orchestrator | 00:01:18.175 STDOUT terraform:  + protocol = "112" 2025-04-04 00:01:18.176026 | 
orchestrator | 00:01:18.175 STDOUT terraform:  + region = (known after apply) 2025-04-04 00:01:18.176061 | orchestrator | 00:01:18.176 STDOUT terraform:  + remote_group_id = (known after apply) 2025-04-04 00:01:18.176078 | orchestrator | 00:01:18.176 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-04-04 00:01:18.176111 | orchestrator | 00:01:18.176 STDOUT terraform:  + security_group_id = (known after apply) 2025-04-04 00:01:18.176145 | orchestrator | 00:01:18.176 STDOUT terraform:  + tenant_id = (known after apply) 2025-04-04 00:01:18.176196 | orchestrator | 00:01:18.176 STDOUT terraform:  } 2025-04-04 00:01:18.176212 | orchestrator | 00:01:18.176 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_management will be created 2025-04-04 00:01:18.176247 | orchestrator | 00:01:18.176 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_management" { 2025-04-04 00:01:18.176278 | orchestrator | 00:01:18.176 STDOUT terraform:  + all_tags = (known after apply) 2025-04-04 00:01:18.176312 | orchestrator | 00:01:18.176 STDOUT terraform:  + description = "management security group" 2025-04-04 00:01:18.176329 | orchestrator | 00:01:18.176 STDOUT terraform:  + id = (known after apply) 2025-04-04 00:01:18.176364 | orchestrator | 00:01:18.176 STDOUT terraform:  + name = "testbed-management" 2025-04-04 00:01:18.176380 | orchestrator | 00:01:18.176 STDOUT terraform:  + region = (known after apply) 2025-04-04 00:01:18.176415 | orchestrator | 00:01:18.176 STDOUT terraform:  + stateful = (known after apply) 2025-04-04 00:01:18.176431 | orchestrator | 00:01:18.176 STDOUT terraform:  + tenant_id = (known after apply) 2025-04-04 00:01:18.176446 | orchestrator | 00:01:18.176 STDOUT terraform:  } 2025-04-04 00:01:18.176494 | orchestrator | 00:01:18.176 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_node will be created 2025-04-04 00:01:18.176539 | orchestrator | 00:01:18.176 STDOUT terraform:  + resource 
"openstack_networking_secgroup_v2" "security_group_node" { 2025-04-04 00:01:18.176555 | orchestrator | 00:01:18.176 STDOUT terraform:  + all_tags = (known after apply) 2025-04-04 00:01:18.176590 | orchestrator | 00:01:18.176 STDOUT terraform:  + description = "node security group" 2025-04-04 00:01:18.176607 | orchestrator | 00:01:18.176 STDOUT terraform:  + id = (known after apply) 2025-04-04 00:01:18.176639 | orchestrator | 00:01:18.176 STDOUT terraform:  + name = "testbed-node" 2025-04-04 00:01:18.176655 | orchestrator | 00:01:18.176 STDOUT terraform:  + region = (known after apply) 2025-04-04 00:01:18.176694 | orchestrator | 00:01:18.176 STDOUT terraform:  + stateful = (known after apply) 2025-04-04 00:01:18.176710 | orchestrator | 00:01:18.176 STDOUT terraform:  + tenant_id = (known after apply) 2025-04-04 00:01:18.176725 | orchestrator | 00:01:18.176 STDOUT terraform:  } 2025-04-04 00:01:18.176766 | orchestrator | 00:01:18.176 STDOUT terraform:  # openstack_networking_subnet_v2.subnet_management will be created 2025-04-04 00:01:18.176809 | orchestrator | 00:01:18.176 STDOUT terraform:  + resource "openstack_networking_subnet_v2" "subnet_management" { 2025-04-04 00:01:18.176843 | orchestrator | 00:01:18.176 STDOUT terraform:  + all_tags = (known after apply) 2025-04-04 00:01:18.176859 | orchestrator | 00:01:18.176 STDOUT terraform:  + cidr = "192.168.16.0/20" 2025-04-04 00:01:18.176874 | orchestrator | 00:01:18.176 STDOUT terraform:  + dns_nameservers = [ 2025-04-04 00:01:18.176889 | orchestrator | 00:01:18.176 STDOUT terraform:  + "8.8.8.8", 2025-04-04 00:01:18.176904 | orchestrator | 00:01:18.176 STDOUT terraform:  + "9.9.9.9", 2025-04-04 00:01:18.176920 | orchestrator | 00:01:18.176 STDOUT terraform:  ] 2025-04-04 00:01:18.176935 | orchestrator | 00:01:18.176 STDOUT terraform:  + enable_dhcp = true 2025-04-04 00:01:18.176950 | orchestrator | 00:01:18.176 STDOUT terraform:  + gateway_ip = (known after apply) 2025-04-04 00:01:18.176988 | orchestrator | 
00:01:18.176 STDOUT terraform:  + id = (known after apply) 2025-04-04 00:01:18.177005 | orchestrator | 00:01:18.176 STDOUT terraform:  + ip_version = 4 2025-04-04 00:01:18.177031 | orchestrator | 00:01:18.176 STDOUT terraform:  + ipv6_address_mode = (known after apply) 2025-04-04 00:01:18.177062 | orchestrator | 00:01:18.177 STDOUT terraform:  + ipv6_ra_mode = (known after apply) 2025-04-04 00:01:18.177101 | orchestrator | 00:01:18.177 STDOUT terraform:  + name = "subnet-testbed-management" 2025-04-04 00:01:18.177127 | orchestrator | 00:01:18.177 STDOUT terraform:  + network_id = (known after apply) 2025-04-04 00:01:18.177142 | orchestrator | 00:01:18.177 STDOUT terraform:  + no_gateway = false 2025-04-04 00:01:18.177177 | orchestrator | 00:01:18.177 STDOUT terraform:  + region = (known after apply) 2025-04-04 00:01:18.177210 | orchestrator | 00:01:18.177 STDOUT terraform:  + service_types = (known after apply) 2025-04-04 00:01:18.177226 | orchestrator | 00:01:18.177 STDOUT terraform:  + tenant_id = (known after apply) 2025-04-04 00:01:18.177241 | orchestrator | 00:01:18.177 STDOUT terraform:  + allocation_pool { 2025-04-04 00:01:18.177287 | orchestrator | 00:01:18.177 STDOUT terraform:  + end = "192.168.31.250" 2025-04-04 00:01:18.177303 | orchestrator | 00:01:18.177 STDOUT terraform:  + start = "192.168.31.200" 2025-04-04 00:01:18.177318 | orchestrator | 00:01:18.177 STDOUT terraform:  } 2025-04-04 00:01:18.177394 | orchestrator | 00:01:18.177 STDOUT terraform:  } 2025-04-04 00:01:18.177411 | orchestrator | 00:01:18.177 STDOUT terraform:  # terraform_data.image will be created 2025-04-04 00:01:18.177433 | orchestrator | 00:01:18.177 STDOUT terraform:  + resource "terraform_data" "image" { 2025-04-04 00:01:18.177438 | orchestrator | 00:01:18.177 STDOUT terraform:  + id = (known after apply) 2025-04-04 00:01:18.177447 | orchestrator | 00:01:18.177 STDOUT terraform:  + input = "Ubuntu 24.04" 2025-04-04 00:01:18.177453 | orchestrator | 00:01:18.177 STDOUT terraform:  
+ output = (known after apply) 2025-04-04 00:01:18.177469 | orchestrator | 00:01:18.177 STDOUT terraform:  } 2025-04-04 00:01:18.177475 | orchestrator | 00:01:18.177 STDOUT terraform:  # terraform_data.image_node will be created 2025-04-04 00:01:18.177498 | orchestrator | 00:01:18.177 STDOUT terraform:  + resource "terraform_data" "image_node" { 2025-04-04 00:01:18.177520 | orchestrator | 00:01:18.177 STDOUT terraform:  + id = (known after apply) 2025-04-04 00:01:18.177540 | orchestrator | 00:01:18.177 STDOUT terraform:  + input = "Ubuntu 24.04" 2025-04-04 00:01:18.177563 | orchestrator | 00:01:18.177 STDOUT terraform:  + output = (known after apply) 2025-04-04 00:01:18.177576 | orchestrator | 00:01:18.177 STDOUT terraform:  } 2025-04-04 00:01:18.177605 | orchestrator | 00:01:18.177 STDOUT terraform: Plan: 82 to add, 0 to change, 0 to destroy. 2025-04-04 00:01:18.177618 | orchestrator | 00:01:18.177 STDOUT terraform: Changes to Outputs: 2025-04-04 00:01:18.177642 | orchestrator | 00:01:18.177 STDOUT terraform:  + manager_address = (sensitive value) 2025-04-04 00:01:18.177665 | orchestrator | 00:01:18.177 STDOUT terraform:  + private_key = (sensitive value) 2025-04-04 00:01:18.338867 | orchestrator | 00:01:18.338 STDOUT terraform: terraform_data.image_node: Creating... 2025-04-04 00:01:18.339400 | orchestrator | 00:01:18.338 STDOUT terraform: terraform_data.image: Creating... 2025-04-04 00:01:18.339437 | orchestrator | 00:01:18.338 STDOUT terraform: terraform_data.image: Creation complete after 0s [id=9decc4f5-9291-ec91-939a-119fd90da376] 2025-04-04 00:01:18.351376 | orchestrator | 00:01:18.339 STDOUT terraform: terraform_data.image_node: Creation complete after 0s [id=76e523b1-1e22-26f2-d9d4-23be250e704f] 2025-04-04 00:01:18.351433 | orchestrator | 00:01:18.351 STDOUT terraform: data.openstack_images_image_v2.image_node: Reading... 2025-04-04 00:01:18.353535 | orchestrator | 00:01:18.353 STDOUT terraform: data.openstack_images_image_v2.image: Reading... 
2025-04-04 00:01:18.358294 | orchestrator | 00:01:18.358 STDOUT terraform: openstack_networking_network_v2.net_management: Creating...
00:01:18.358 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creating...
00:01:18.360 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creating...
00:01:18.360 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creating...
00:01:18.362 STDOUT terraform: openstack_compute_keypair_v2.key: Creating...
00:01:18.363 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[12]: Creating...
00:01:18.363 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[13]: Creating...
00:01:18.364 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creating...
00:01:18.775 STDOUT terraform: data.openstack_images_image_v2.image_node: Read complete after 1s [id=cd9ae1ce-c4eb-4380-9087-2aa040df6990]
00:01:18.783 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creating...
00:01:18.785 STDOUT terraform: data.openstack_images_image_v2.image: Read complete after 1s [id=cd9ae1ce-c4eb-4380-9087-2aa040df6990]
00:01:18.792 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[15]: Creating...
00:01:18.905 STDOUT terraform: openstack_compute_keypair_v2.key: Creation complete after 1s [id=testbed]
00:01:18.910 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creating...
00:01:24.208 STDOUT terraform: openstack_networking_network_v2.net_management: Creation complete after 6s [id=7b16a97d-e02f-4996-a8fb-9cb506c118f9]
00:01:24.216 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[17]: Creating...
00:01:28.360 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Still creating... [10s elapsed]
00:01:28.363 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Still creating... [10s elapsed]
00:01:28.363 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Still creating... [10s elapsed]
00:01:28.366 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[12]: Still creating... [10s elapsed]
00:01:28.366 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Still creating... [10s elapsed]
00:01:28.366 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[13]: Still creating... [10s elapsed]
00:01:28.782 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Still creating... [10s elapsed]
00:01:28.793 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[15]: Still creating... [10s elapsed]
00:01:28.910 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Still creating... [10s elapsed]
00:01:28.958 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 11s [id=3de80298-c30a-4a56-a2b0-629a93b61883]
00:01:28.968 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creating...
00:01:28.975 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[12]: Creation complete after 11s [id=47cf353f-9933-4506-be1a-6fb18a4a07e9]
00:01:28.984 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[16]: Creating...
00:01:28.996 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 11s [id=3e7839f5-850f-45cb-a036-1432019fe132]
00:01:29.002 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[14]: Creating...
00:01:29.017 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[13]: Creation complete after 11s [id=2b3ca686-3ae7-468f-a1a4-0ae990752473]
00:01:29.027 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[11]: Creating...
00:01:29.029 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 11s [id=d452a44b-aae6-4065-a78c-9a36ae27c0a3]
00:01:29.036 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creating...
00:01:29.054 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 11s [id=adf28255-e106-45eb-9832-34929d832a0d]
00:01:29.060 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creating...
00:01:29.072 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 10s [id=265f820c-5a3e-430e-ae5a-4674d09369cc]
00:01:29.080 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[10]: Creating...
00:01:29.105 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[15]: Creation complete after 10s [id=5f8b0337-6fe2-4311-94ea-5b7abe02e48e]
00:01:29.110 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[9]: Creating...
00:01:29.120 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 10s [id=07b46a09-8c2f-4f65-be68-ae4e772e446d]
00:01:29.129 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creating...
00:01:34.217 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[17]: Still creating... [10s elapsed]
00:01:34.384 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[17]: Creation complete after 10s [id=40370202-6a2d-4119-85f5-057a26d35c03]
00:01:34.392 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creating...
00:01:38.969 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Still creating... [10s elapsed]
00:01:38.985 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[16]: Still creating... [10s elapsed]
00:01:39.002 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[14]: Still creating... [10s elapsed]
00:01:39.028 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[11]: Still creating... [10s elapsed]
00:01:39.036 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Still creating... [10s elapsed]
00:01:39.061 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Still creating... [10s elapsed]
00:01:39.081 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[10]: Still creating... [10s elapsed]
00:01:39.111 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[9]: Still creating... [10s elapsed]
00:01:39.130 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Still creating... [10s elapsed]
00:01:39.161 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 10s [id=3c72412b-1c42-4489-963e-1990a8f04f17]
00:01:39.174 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creating...
00:01:39.199 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[16]: Creation complete after 10s [id=d959f59d-34fb-41af-b696-545de6cad1c5]
00:01:39.214 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creating...
00:01:39.223 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[11]: Creation complete after 10s [id=9bee64ad-b5e3-4230-9d8c-a8a301110b73]
00:01:39.229 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creating...
00:01:39.242 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[14]: Creation complete after 10s [id=84918494-8543-4979-af29-701b77c4e956]
00:01:39.246 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 10s [id=4a1a9f2a-46e3-417d-bac7-2b764837f325]
00:01:39.251 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating...
00:01:39.253 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creating...
00:01:39.276 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 10s [id=8c50131b-2e20-4199-9321-7d2c38984480]
00:01:39.291 STDOUT terraform: local_sensitive_file.id_rsa: Creating...
00:01:39.295 STDOUT terraform: local_sensitive_file.id_rsa: Creation complete after 0s [id=c31854c3760f4ac37ab293c82d0c4f9d3819b400]
00:01:39.300 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[10]: Creation complete after 10s [id=b8c398ed-0e41-4b9f-9814-6176b4164583]
00:01:39.311 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creating...
00:01:39.315 STDOUT terraform: local_file.id_rsa_pub: Creating...
00:01:39.316 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[9]: Creation complete after 10s [id=7e1bb25e-090b-4c97-add7-925cccadf2fe]
00:01:39.319 STDOUT terraform: local_file.id_rsa_pub: Creation complete after 0s [id=e0f7f43701d40826953eebaeff74edaaa0667d68]
00:01:39.463 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 10s [id=e7ceaeb8-d51b-4add-b51e-61c92b64ddb7]
00:01:44.394 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Still creating... [10s elapsed]
00:01:44.702 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 11s [id=2d6bb235-b4a5-4107-827e-9430e4ec8db1]
00:01:45.088 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creation complete after 6s [id=4683eb8c-f9fc-4132-8ec6-01e930db51c2]
00:01:45.095 STDOUT terraform: openstack_networking_router_v2.router: Creating...
00:01:49.174 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Still creating... [10s elapsed]
00:01:49.215 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Still creating... [10s elapsed]
00:01:49.230 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Still creating... [10s elapsed]
00:01:49.252 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Still creating...
[10s elapsed] 2025-04-04 00:01:49.254432 | orchestrator | 00:01:49.254 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Still creating... [10s elapsed] 2025-04-04 00:01:49.509077 | orchestrator | 00:01:49.508 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 11s [id=f69ee639-f0f9-41e2-80c5-4bc328a7a043] 2025-04-04 00:01:49.551467 | orchestrator | 00:01:49.551 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 11s [id=20ca7b9e-ccd1-48dd-bac0-d92389f96c9e] 2025-04-04 00:01:49.582892 | orchestrator | 00:01:49.582 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 11s [id=cf831ac4-ec72-4e5f-9ce6-19359424a886] 2025-04-04 00:01:49.621497 | orchestrator | 00:01:49.621 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 11s [id=d5629210-3dee-462b-b636-658d0d85107d] 2025-04-04 00:01:49.623473 | orchestrator | 00:01:49.623 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 11s [id=3aa6ea8b-0579-4a55-b42a-d9feea6f29a9] 2025-04-04 00:01:52.230650 | orchestrator | 00:01:52.230 STDOUT terraform: openstack_networking_router_v2.router: Creation complete after 7s [id=5283aed3-d622-49b5-885d-ca5a6dc7c62b] 2025-04-04 00:01:52.238792 | orchestrator | 00:01:52.238 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creating... 2025-04-04 00:01:52.240855 | orchestrator | 00:01:52.240 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creating... 2025-04-04 00:01:52.240939 | orchestrator | 00:01:52.240 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creating... 
2025-04-04 00:01:52.371366 | orchestrator | 00:01:52.370 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=211e0dcb-452b-4955-8358-8ae2e174c766] 2025-04-04 00:01:52.382870 | orchestrator | 00:01:52.382 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating... 2025-04-04 00:01:52.389873 | orchestrator | 00:01:52.389 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creating... 2025-04-04 00:01:52.389927 | orchestrator | 00:01:52.389 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating... 2025-04-04 00:01:52.389991 | orchestrator | 00:01:52.389 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating... 2025-04-04 00:01:52.392926 | orchestrator | 00:01:52.392 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating... 2025-04-04 00:01:52.392983 | orchestrator | 00:01:52.392 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating... 2025-04-04 00:01:52.414539 | orchestrator | 00:01:52.414 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creation complete after 0s [id=1c366904-a1f7-413e-b037-9d9ca818c131] 2025-04-04 00:01:52.420006 | orchestrator | 00:01:52.419 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating... 2025-04-04 00:01:52.421056 | orchestrator | 00:01:52.420 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating... 2025-04-04 00:01:52.421877 | orchestrator | 00:01:52.421 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating... 
2025-04-04 00:01:52.511826 | orchestrator | 00:01:52.511 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 1s [id=c7bb1f04-7e2a-4040-800a-98f4a17e0252] 2025-04-04 00:01:52.525086 | orchestrator | 00:01:52.524 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creating... 2025-04-04 00:01:52.590922 | orchestrator | 00:01:52.590 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 1s [id=2da070f3-ef69-4796-ba8f-d234d269ea41] 2025-04-04 00:01:52.604147 | orchestrator | 00:01:52.603 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creating... 2025-04-04 00:01:52.631378 | orchestrator | 00:01:52.631 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 1s [id=250b05fc-a62a-4c6a-9535-6bddc0b63b9d] 2025-04-04 00:01:52.646350 | orchestrator | 00:01:52.646 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creating... 2025-04-04 00:01:52.715003 | orchestrator | 00:01:52.714 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 1s [id=60dadad7-6bd1-4ad0-82af-41419c585bcd] 2025-04-04 00:01:52.727470 | orchestrator | 00:01:52.727 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creating... 2025-04-04 00:01:52.798296 | orchestrator | 00:01:52.797 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 1s [id=c6b5d88a-47aa-42ee-93dd-f42c173a1e6f] 2025-04-04 00:01:52.812527 | orchestrator | 00:01:52.812 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creating... 
2025-04-04 00:01:52.835709 | orchestrator | 00:01:52.835 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 1s [id=8bfa8067-2989-4729-9c83-eb8fe71813b1] 2025-04-04 00:01:52.847790 | orchestrator | 00:01:52.847 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creating... 2025-04-04 00:01:52.977281 | orchestrator | 00:01:52.976 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 1s [id=8855ad42-95b4-40f5-b98f-495f1d0b1cd7] 2025-04-04 00:01:52.983701 | orchestrator | 00:01:52.983 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating... 2025-04-04 00:01:53.120164 | orchestrator | 00:01:53.119 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 1s [id=ec6ce172-d1f8-43de-abe8-411714063f25] 2025-04-04 00:01:53.135884 | orchestrator | 00:01:53.135 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 0s [id=c981b639-f731-44b9-be9e-967440158877] 2025-04-04 00:01:58.189568 | orchestrator | 00:01:58.189 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creation complete after 6s [id=d92b05ad-ef13-48d4-99c8-9fe65b6d37e4] 2025-04-04 00:01:58.281065 | orchestrator | 00:01:58.280 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creation complete after 5s [id=38d30e5b-650e-4598-b5fe-96e9570c9a66] 2025-04-04 00:01:58.541370 | orchestrator | 00:01:58.540 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creation complete after 6s [id=b56fb027-96ef-4285-baf2-cd528873fb18] 2025-04-04 00:01:58.637403 | orchestrator | 00:01:58.637 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creation complete after 6s [id=711e421e-789b-40f4-b094-1da6812c8afe] 2025-04-04 00:01:58.881191 | orchestrator | 00:01:58.880 STDOUT 
terraform: openstack_networking_port_v2.node_port_management[2]: Creation complete after 6s [id=ab132a81-d8b0-49b5-b95b-2d4094468ee3] 2025-04-04 00:01:58.982380 | orchestrator | 00:01:58.982 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creation complete after 6s [id=cf5efb77-bf08-40d5-931b-059849d8f8d0] 2025-04-04 00:01:59.113787 | orchestrator | 00:01:59.113 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creation complete after 6s [id=cc867b26-532c-4628-b775-e30318730bfc] 2025-04-04 00:01:59.257882 | orchestrator | 00:01:59.257 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creation complete after 7s [id=79557ce7-2c24-4b99-a1fc-50cfd261e684] 2025-04-04 00:01:59.281341 | orchestrator | 00:01:59.281 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creating... 2025-04-04 00:01:59.295078 | orchestrator | 00:01:59.294 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creating... 2025-04-04 00:01:59.301516 | orchestrator | 00:01:59.301 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creating... 2025-04-04 00:01:59.301805 | orchestrator | 00:01:59.301 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creating... 2025-04-04 00:01:59.311129 | orchestrator | 00:01:59.311 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creating... 2025-04-04 00:01:59.312191 | orchestrator | 00:01:59.312 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creating... 2025-04-04 00:01:59.312432 | orchestrator | 00:01:59.312 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creating... 
2025-04-04 00:02:06.352116 | orchestrator | 00:02:06.351 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 7s [id=edface51-9e73-46c0-8830-a56a44a279ec] 2025-04-04 00:02:06.365045 | orchestrator | 00:02:06.364 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating... 2025-04-04 00:02:06.370471 | orchestrator | 00:02:06.370 STDOUT terraform: local_file.MANAGER_ADDRESS: Creating... 2025-04-04 00:02:06.370614 | orchestrator | 00:02:06.370 STDOUT terraform: local_file.inventory: Creating... 2025-04-04 00:02:06.374115 | orchestrator | 00:02:06.373 STDOUT terraform: local_file.inventory: Creation complete after 0s [id=41ca8d41c154eaddb634438e1cf1bb892201eb4f] 2025-04-04 00:02:06.374817 | orchestrator | 00:02:06.374 STDOUT terraform: local_file.MANAGER_ADDRESS: Creation complete after 0s [id=7aa5fb3cf621ed7fa207d8eb62814b64ba55c68b] 2025-04-04 00:02:06.907858 | orchestrator | 00:02:06.907 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 1s [id=edface51-9e73-46c0-8830-a56a44a279ec] 2025-04-04 00:02:09.298435 | orchestrator | 00:02:09.298 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed] 2025-04-04 00:02:09.309700 | orchestrator | 00:02:09.309 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed] 2025-04-04 00:02:09.310873 | orchestrator | 00:02:09.310 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed] 2025-04-04 00:02:09.313187 | orchestrator | 00:02:09.313 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed] 2025-04-04 00:02:09.316491 | orchestrator | 00:02:09.316 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... 
[10s elapsed] 2025-04-04 00:02:09.316617 | orchestrator | 00:02:09.316 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed] 2025-04-04 00:02:19.299556 | orchestrator | 00:02:19.299 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed] 2025-04-04 00:02:19.310647 | orchestrator | 00:02:19.310 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed] 2025-04-04 00:02:19.311762 | orchestrator | 00:02:19.311 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed] 2025-04-04 00:02:19.313976 | orchestrator | 00:02:19.313 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed] 2025-04-04 00:02:19.317478 | orchestrator | 00:02:19.317 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed] 2025-04-04 00:02:19.317615 | orchestrator | 00:02:19.317 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed] 2025-04-04 00:02:19.595988 | orchestrator | 00:02:19.595 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creation complete after 21s [id=29ab7e65-2ea2-4dbf-8b43-f66487bcf943] 2025-04-04 00:02:19.695492 | orchestrator | 00:02:19.695 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creation complete after 21s [id=31e493e2-152d-4e62-9b7a-241811eb2eab] 2025-04-04 00:02:19.793395 | orchestrator | 00:02:19.793 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creation complete after 21s [id=8cad1a1b-0c91-4f9e-9f73-c0ce2aef4dc9] 2025-04-04 00:02:29.315420 | orchestrator | 00:02:29.315 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [30s elapsed] 2025-04-04 00:02:29.318480 | orchestrator | 00:02:29.318 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... 
[30s elapsed] 2025-04-04 00:02:29.318616 | orchestrator | 00:02:29.318 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [30s elapsed] 2025-04-04 00:02:29.865793 | orchestrator | 00:02:29.865 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creation complete after 31s [id=2827b1bd-14c7-4f09-abc6-0f0a8f931310] 2025-04-04 00:02:30.083595 | orchestrator | 00:02:30.083 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creation complete after 31s [id=12d8ee8b-d01f-4959-b369-69ac972b0b3e] 2025-04-04 00:02:30.333530 | orchestrator | 00:02:30.333 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creation complete after 31s [id=a45aa7a0-8093-4d40-9806-bc422d6d9d92] 2025-04-04 00:02:30.366859 | orchestrator | 00:02:30.366 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating... 2025-04-04 00:02:30.369062 | orchestrator | 00:02:30.368 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating... 2025-04-04 00:02:30.370649 | orchestrator | 00:02:30.370 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[12]: Creating... 2025-04-04 00:02:30.372783 | orchestrator | 00:02:30.372 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[15]: Creating... 2025-04-04 00:02:30.375057 | orchestrator | 00:02:30.374 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating... 2025-04-04 00:02:30.378074 | orchestrator | 00:02:30.377 STDOUT terraform: null_resource.node_semaphore: Creating... 2025-04-04 00:02:30.388560 | orchestrator | 00:02:30.388 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating... 2025-04-04 00:02:30.389002 | orchestrator | 00:02:30.388 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating... 
2025-04-04 00:02:30.391334 | orchestrator | 00:02:30.391 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[10]: Creating... 2025-04-04 00:02:30.394582 | orchestrator | 00:02:30.394 STDOUT terraform: null_resource.node_semaphore: Creation complete after 0s [id=4533898976382898435] 2025-04-04 00:02:30.406951 | orchestrator | 00:02:30.406 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[9]: Creating... 2025-04-04 00:02:30.417569 | orchestrator | 00:02:30.417 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[16]: Creating... 2025-04-04 00:02:35.687953 | orchestrator | 00:02:35.687 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 6s [id=29ab7e65-2ea2-4dbf-8b43-f66487bcf943/3c72412b-1c42-4489-963e-1990a8f04f17] 2025-04-04 00:02:35.708369 | orchestrator | 00:02:35.707 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[12]: Creation complete after 6s [id=12d8ee8b-d01f-4959-b369-69ac972b0b3e/47cf353f-9933-4506-be1a-6fb18a4a07e9] 2025-04-04 00:02:35.712080 | orchestrator | 00:02:35.711 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[11]: Creating... 2025-04-04 00:02:35.716780 | orchestrator | 00:02:35.716 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating... 2025-04-04 00:02:35.729860 | orchestrator | 00:02:35.729 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 6s [id=a45aa7a0-8093-4d40-9806-bc422d6d9d92/07b46a09-8c2f-4f65-be68-ae4e772e446d] 2025-04-04 00:02:35.736083 | orchestrator | 00:02:35.735 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[14]: Creating... 
2025-04-04 00:02:35.753684 | orchestrator | 00:02:35.753 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 6s [id=31e493e2-152d-4e62-9b7a-241811eb2eab/3e7839f5-850f-45cb-a036-1432019fe132] 2025-04-04 00:02:35.754387 | orchestrator | 00:02:35.754 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 6s [id=12d8ee8b-d01f-4959-b369-69ac972b0b3e/adf28255-e106-45eb-9832-34929d832a0d] 2025-04-04 00:02:35.765785 | orchestrator | 00:02:35.765 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating... 2025-04-04 00:02:35.768390 | orchestrator | 00:02:35.768 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[17]: Creating... 2025-04-04 00:02:35.774635 | orchestrator | 00:02:35.774 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[15]: Creation complete after 6s [id=2827b1bd-14c7-4f09-abc6-0f0a8f931310/5f8b0337-6fe2-4311-94ea-5b7abe02e48e] 2025-04-04 00:02:35.778047 | orchestrator | 00:02:35.777 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 6s [id=12d8ee8b-d01f-4959-b369-69ac972b0b3e/3de80298-c30a-4a56-a2b0-629a93b61883] 2025-04-04 00:02:35.790129 | orchestrator | 00:02:35.789 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[10]: Creation complete after 6s [id=29ab7e65-2ea2-4dbf-8b43-f66487bcf943/b8c398ed-0e41-4b9f-9814-6176b4164583] 2025-04-04 00:02:35.792414 | orchestrator | 00:02:35.792 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating... 2025-04-04 00:02:35.802120 | orchestrator | 00:02:35.801 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating... 2025-04-04 00:02:35.802842 | orchestrator | 00:02:35.802 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[13]: Creating... 
2025-04-04 00:02:35.809593 | orchestrator | 00:02:35.809 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[9]: Creation complete after 6s [id=2827b1bd-14c7-4f09-abc6-0f0a8f931310/7e1bb25e-090b-4c97-add7-925cccadf2fe] 2025-04-04 00:02:35.824317 | orchestrator | 00:02:35.824 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creating... 2025-04-04 00:02:38.896107 | orchestrator | 00:02:38.895 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[16]: Creation complete after 9s [id=29ab7e65-2ea2-4dbf-8b43-f66487bcf943/d959f59d-34fb-41af-b696-545de6cad1c5] 2025-04-04 00:02:41.086227 | orchestrator | 00:02:41.085 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 5s [id=8cad1a1b-0c91-4f9e-9f73-c0ce2aef4dc9/4a1a9f2a-46e3-417d-bac7-2b764837f325] 2025-04-04 00:02:41.096316 | orchestrator | 00:02:41.095 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[14]: Creation complete after 5s [id=31e493e2-152d-4e62-9b7a-241811eb2eab/84918494-8543-4979-af29-701b77c4e956] 2025-04-04 00:02:41.112321 | orchestrator | 00:02:41.111 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[11]: Creation complete after 5s [id=a45aa7a0-8093-4d40-9806-bc422d6d9d92/9bee64ad-b5e3-4230-9d8c-a8a301110b73] 2025-04-04 00:02:41.117962 | orchestrator | 00:02:41.117 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 5s [id=31e493e2-152d-4e62-9b7a-241811eb2eab/265f820c-5a3e-430e-ae5a-4674d09369cc] 2025-04-04 00:02:41.126066 | orchestrator | 00:02:41.125 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 5s [id=2827b1bd-14c7-4f09-abc6-0f0a8f931310/d452a44b-aae6-4065-a78c-9a36ae27c0a3] 2025-04-04 00:02:41.155441 | orchestrator | 00:02:41.154 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[17]: Creation complete after 
5s [id=a45aa7a0-8093-4d40-9806-bc422d6d9d92/40370202-6a2d-4119-85f5-057a26d35c03] 2025-04-04 00:02:41.545224 | orchestrator | 00:02:41.544 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 6s [id=8cad1a1b-0c91-4f9e-9f73-c0ce2aef4dc9/8c50131b-2e20-4199-9321-7d2c38984480] 2025-04-04 00:02:41.724117 | orchestrator | 00:02:41.723 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[13]: Creation complete after 6s [id=8cad1a1b-0c91-4f9e-9f73-c0ce2aef4dc9/2b3ca686-3ae7-468f-a1a4-0ae990752473] 2025-04-04 00:02:45.825736 | orchestrator | 00:02:45.825 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed] 2025-04-04 00:02:55.829741 | orchestrator | 00:02:55.829 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed] 2025-04-04 00:02:56.399382 | orchestrator | 00:02:56.398 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creation complete after 20s [id=6c83e119-4beb-4331-94c6-3718c4626732] 2025-04-04 00:02:56.478956 | orchestrator | 00:02:56.478 STDOUT terraform: Apply complete! Resources: 82 added, 0 changed, 0 destroyed. 
2025-04-04 00:02:56.479066 | orchestrator | 00:02:56.478 STDOUT terraform: Outputs: 2025-04-04 00:02:56.479090 | orchestrator | 00:02:56.478 STDOUT terraform: manager_address = 2025-04-04 00:02:56.486880 | orchestrator | 00:02:56.478 STDOUT terraform: private_key = 2025-04-04 00:03:06.691494 | orchestrator | changed 2025-04-04 00:03:06.727437 | 2025-04-04 00:03:06.727554 | TASK [Fetch manager address] 2025-04-04 00:03:07.140039 | orchestrator | ok 2025-04-04 00:03:07.153725 | 2025-04-04 00:03:07.153832 | TASK [Set manager_host address] 2025-04-04 00:03:07.267565 | orchestrator | ok 2025-04-04 00:03:07.278337 | 2025-04-04 00:03:07.278446 | LOOP [Update ansible collections] 2025-04-04 00:03:07.949669 | orchestrator | changed 2025-04-04 00:03:08.665872 | orchestrator | changed 2025-04-04 00:03:08.688151 | 2025-04-04 00:03:08.688299 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-04-04 00:03:19.210105 | orchestrator | ok 2025-04-04 00:03:19.224153 | 2025-04-04 00:03:19.224476 | TASK [Wait a little longer for the manager so that everything is ready] 2025-04-04 00:04:19.266165 | orchestrator | ok 2025-04-04 00:04:19.274979 | 2025-04-04 00:04:19.275134 | TASK [Fetch manager ssh hostkey] 2025-04-04 00:04:20.352609 | orchestrator | Output suppressed because no_log was given 2025-04-04 00:04:20.369172 | 2025-04-04 00:04:20.369299 | TASK [Get ssh keypair from terraform environment] 2025-04-04 00:04:20.934475 | orchestrator | changed 2025-04-04 00:04:20.953527 | 2025-04-04 00:04:20.953654 | TASK [Point out that the following task takes some time and does not give any output] 2025-04-04 00:04:20.994066 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 
2025-04-04 00:04:21.004462 | 2025-04-04 00:04:21.004549 | TASK [Run manager part 0] 2025-04-04 00:04:21.831495 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-04-04 00:04:21.872350 | orchestrator | 2025-04-04 00:04:23.895391 | orchestrator | PLAY [Wait for cloud-init to finish] ******************************************* 2025-04-04 00:04:23.895438 | orchestrator | 2025-04-04 00:04:23.895456 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] ***************************** 2025-04-04 00:04:23.895469 | orchestrator | ok: [testbed-manager] 2025-04-04 00:04:25.909275 | orchestrator | 2025-04-04 00:04:25.909322 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-04-04 00:04:25.909331 | orchestrator | 2025-04-04 00:04:25.909337 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-04-04 00:04:25.909347 | orchestrator | ok: [testbed-manager] 2025-04-04 00:04:26.564063 | orchestrator | 2025-04-04 00:04:26.564107 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-04-04 00:04:26.564119 | orchestrator | ok: [testbed-manager] 2025-04-04 00:04:26.602911 | orchestrator | 2025-04-04 00:04:26.602923 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-04-04 00:04:26.602930 | orchestrator | skipping: [testbed-manager] 2025-04-04 00:04:26.635730 | orchestrator | 2025-04-04 00:04:26.635740 | orchestrator | TASK [Update package cache] **************************************************** 2025-04-04 00:04:26.635747 | orchestrator | skipping: [testbed-manager] 2025-04-04 00:04:26.660166 | orchestrator | 2025-04-04 00:04:26.660177 | orchestrator | TASK [Install required packages] *********************************************** 2025-04-04 00:04:26.660183 | orchestrator | skipping: [testbed-manager] 2025-04-04 00:04:26.681180 | 
orchestrator | 2025-04-04 00:04:26.681190 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-04-04 00:04:26.681196 | orchestrator | skipping: [testbed-manager] 2025-04-04 00:04:26.702863 | orchestrator | 2025-04-04 00:04:26.702872 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-04-04 00:04:26.702878 | orchestrator | skipping: [testbed-manager] 2025-04-04 00:04:26.726943 | orchestrator | 2025-04-04 00:04:26.726952 | orchestrator | TASK [Fail if Ubuntu version is lower than 22.04] ****************************** 2025-04-04 00:04:26.726958 | orchestrator | skipping: [testbed-manager] 2025-04-04 00:04:26.749761 | orchestrator | 2025-04-04 00:04:26.749774 | orchestrator | TASK [Fail if Debian version is lower than 12] ********************************* 2025-04-04 00:04:26.749781 | orchestrator | skipping: [testbed-manager] 2025-04-04 00:04:27.617258 | orchestrator | 2025-04-04 00:04:27.617318 | orchestrator | TASK [Set APT options on manager] ********************************************** 2025-04-04 00:04:27.617344 | orchestrator | changed: [testbed-manager] 2025-04-04 00:07:28.435799 | orchestrator | 2025-04-04 00:07:28.435907 | orchestrator | TASK [Update APT cache and run dist-upgrade] *********************************** 2025-04-04 00:07:28.435947 | orchestrator | changed: [testbed-manager] 2025-04-04 00:09:02.599866 | orchestrator | 2025-04-04 00:09:02.599978 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2025-04-04 00:09:02.600013 | orchestrator | changed: [testbed-manager] 2025-04-04 00:09:29.114429 | orchestrator | 2025-04-04 00:09:29.114543 | orchestrator | TASK [Install required packages] *********************************************** 2025-04-04 00:09:29.114578 | orchestrator | changed: [testbed-manager] 2025-04-04 00:09:39.540911 | orchestrator | 2025-04-04 00:09:39.541008 | orchestrator | TASK [Remove 
some python packages] ********************************************* 2025-04-04 00:09:39.541067 | orchestrator | changed: [testbed-manager] 2025-04-04 00:09:39.581970 | orchestrator | 2025-04-04 00:09:39.582072 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-04-04 00:09:39.582117 | orchestrator | ok: [testbed-manager] 2025-04-04 00:09:40.496878 | orchestrator | 2025-04-04 00:09:40.496931 | orchestrator | TASK [Get current user] ******************************************************** 2025-04-04 00:09:40.496950 | orchestrator | ok: [testbed-manager] 2025-04-04 00:09:41.303665 | orchestrator | 2025-04-04 00:09:41.303765 | orchestrator | TASK [Create venv directory] *************************************************** 2025-04-04 00:09:41.303806 | orchestrator | changed: [testbed-manager] 2025-04-04 00:09:49.775610 | orchestrator | 2025-04-04 00:09:49.775724 | orchestrator | TASK [Install netaddr in venv] ************************************************* 2025-04-04 00:09:49.775762 | orchestrator | changed: [testbed-manager] 2025-04-04 00:09:57.935251 | orchestrator | 2025-04-04 00:09:57.935390 | orchestrator | TASK [Install ansible-core in venv] ******************************************** 2025-04-04 00:09:57.935440 | orchestrator | changed: [testbed-manager] 2025-04-04 00:10:01.761820 | orchestrator | 2025-04-04 00:10:01.761896 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2025-04-04 00:10:01.761924 | orchestrator | changed: [testbed-manager] 2025-04-04 00:10:03.830712 | orchestrator | 2025-04-04 00:10:03.830815 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2025-04-04 00:10:03.830849 | orchestrator | changed: [testbed-manager] 2025-04-04 00:10:05.035220 | orchestrator | 2025-04-04 00:10:05.035326 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2025-04-04 
00:10:05.035361 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-04-04 00:10:05.076377 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-04-04 00:10:05.076455 | orchestrator | 2025-04-04 00:10:05.076473 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2025-04-04 00:10:05.076496 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-04-04 00:10:08.368154 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-04-04 00:10:08.368206 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-04-04 00:10:08.368214 | orchestrator | deprecation_warnings=False in ansible.cfg. 2025-04-04 00:10:08.368228 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-04-04 00:10:08.990991 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-04-04 00:10:08.991097 | orchestrator | 2025-04-04 00:10:08.991114 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2025-04-04 00:10:08.991139 | orchestrator | changed: [testbed-manager] 2025-04-04 00:10:29.139639 | orchestrator | 2025-04-04 00:10:29.139750 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2025-04-04 00:10:29.139786 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2025-04-04 00:10:31.881518 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2025-04-04 00:10:31.881567 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2025-04-04 00:10:31.881574 | orchestrator | 2025-04-04 00:10:31.881581 | orchestrator | TASK [Install local collections] *********************************************** 2025-04-04 00:10:31.881594 | orchestrator | changed: [testbed-manager] => 
(item=ansible-collection-commons) 2025-04-04 00:10:33.309503 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2025-04-04 00:10:33.309600 | orchestrator | 2025-04-04 00:10:33.309618 | orchestrator | PLAY [Create operator user] **************************************************** 2025-04-04 00:10:33.309635 | orchestrator | 2025-04-04 00:10:33.309650 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-04-04 00:10:33.309680 | orchestrator | ok: [testbed-manager] 2025-04-04 00:10:33.351191 | orchestrator | 2025-04-04 00:10:33.351264 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-04-04 00:10:33.351293 | orchestrator | ok: [testbed-manager] 2025-04-04 00:10:33.406690 | orchestrator | 2025-04-04 00:10:33.406756 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-04-04 00:10:33.406784 | orchestrator | ok: [testbed-manager] 2025-04-04 00:10:34.242127 | orchestrator | 2025-04-04 00:10:34.242185 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-04-04 00:10:34.242209 | orchestrator | changed: [testbed-manager] 2025-04-04 00:10:35.048062 | orchestrator | 2025-04-04 00:10:35.048149 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-04-04 00:10:35.048177 | orchestrator | changed: [testbed-manager] 2025-04-04 00:10:36.504118 | orchestrator | 2025-04-04 00:10:36.504229 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-04-04 00:10:36.504265 | orchestrator | changed: [testbed-manager] => (item=adm) 2025-04-04 00:10:37.995495 | orchestrator | changed: [testbed-manager] => (item=sudo) 2025-04-04 00:10:37.995597 | orchestrator | 2025-04-04 00:10:37.995616 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] 
************************* 2025-04-04 00:10:37.995646 | orchestrator | changed: [testbed-manager] 2025-04-04 00:10:39.870475 | orchestrator | 2025-04-04 00:10:39.870530 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2025-04-04 00:10:39.870551 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2025-04-04 00:10:40.612370 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2025-04-04 00:10:40.612423 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2025-04-04 00:10:40.612435 | orchestrator | 2025-04-04 00:10:40.612444 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2025-04-04 00:10:40.612462 | orchestrator | changed: [testbed-manager] 2025-04-04 00:10:40.680344 | orchestrator | 2025-04-04 00:10:40.680403 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2025-04-04 00:10:40.680421 | orchestrator | skipping: [testbed-manager] 2025-04-04 00:10:41.665975 | orchestrator | 2025-04-04 00:10:41.666164 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2025-04-04 00:10:41.666205 | orchestrator | changed: [testbed-manager] => (item=None) 2025-04-04 00:10:41.705097 | orchestrator | changed: [testbed-manager] 2025-04-04 00:10:41.705181 | orchestrator | 2025-04-04 00:10:41.705199 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2025-04-04 00:10:41.705225 | orchestrator | skipping: [testbed-manager] 2025-04-04 00:10:41.741380 | orchestrator | 2025-04-04 00:10:41.741404 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2025-04-04 00:10:41.741423 | orchestrator | skipping: [testbed-manager] 2025-04-04 00:10:41.775408 | orchestrator | 2025-04-04 00:10:41.775477 | orchestrator | TASK [osism.commons.operator : Delete 
authorized GitHub accounts] ************** 2025-04-04 00:10:41.775503 | orchestrator | skipping: [testbed-manager] 2025-04-04 00:10:41.818770 | orchestrator | 2025-04-04 00:10:41.818797 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2025-04-04 00:10:41.818815 | orchestrator | skipping: [testbed-manager] 2025-04-04 00:10:42.584923 | orchestrator | 2025-04-04 00:10:42.585010 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2025-04-04 00:10:42.585059 | orchestrator | ok: [testbed-manager] 2025-04-04 00:10:44.050401 | orchestrator | 2025-04-04 00:10:44.050702 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-04-04 00:10:44.050724 | orchestrator | 2025-04-04 00:10:44.050737 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-04-04 00:10:44.050763 | orchestrator | ok: [testbed-manager] 2025-04-04 00:10:45.079469 | orchestrator | 2025-04-04 00:10:45.079516 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2025-04-04 00:10:45.079530 | orchestrator | changed: [testbed-manager] 2025-04-04 00:10:45.170987 | orchestrator | 2025-04-04 00:10:45.171087 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-04 00:10:45.171095 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0 2025-04-04 00:10:45.171101 | orchestrator | 2025-04-04 00:10:45.249014 | orchestrator | changed 2025-04-04 00:10:45.282631 | 2025-04-04 00:10:45.282781 | TASK [Point out that logging in to the manager is now possible] 2025-04-04 00:10:45.319115 | orchestrator | ok: It is already possible to log in to the manager with 'make login'. 
2025-04-04 00:10:45.329333 | 2025-04-04 00:10:45.329433 | TASK [Point out that the following task takes some time and does not give any output] 2025-04-04 00:10:45.367755 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. No further output is shown here. It takes a few minutes for this task to complete. 2025-04-04 00:10:45.378418 | 2025-04-04 00:10:45.378522 | TASK [Run manager part 1 + 2] 2025-04-04 00:10:46.196063 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-04-04 00:10:46.248602 | orchestrator | 2025-04-04 00:10:48.688606 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2025-04-04 00:10:48.688708 | orchestrator | 2025-04-04 00:10:48.688759 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-04-04 00:10:48.688798 | orchestrator | ok: [testbed-manager] 2025-04-04 00:10:48.726142 | orchestrator | 2025-04-04 00:10:48.726217 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-04-04 00:10:48.726253 | orchestrator | skipping: [testbed-manager] 2025-04-04 00:10:48.762948 | orchestrator | 2025-04-04 00:10:48.763004 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-04-04 00:10:48.763049 | orchestrator | ok: [testbed-manager] 2025-04-04 00:10:48.796227 | orchestrator | 2025-04-04 00:10:48.796280 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-04-04 00:10:48.796305 | orchestrator | ok: [testbed-manager] 2025-04-04 00:10:48.855688 | orchestrator | 2025-04-04 00:10:48.855744 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-04-04 00:10:48.855769 | orchestrator | ok: [testbed-manager] 2025-04-04 00:10:48.910551 | orchestrator | 2025-04-04 00:10:48.910613 | 
orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-04-04 00:10:48.910639 | orchestrator | ok: [testbed-manager] 2025-04-04 00:10:48.964606 | orchestrator | 2025-04-04 00:10:48.964659 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-04-04 00:10:48.964686 | orchestrator | included: /home/zuul-testbed01/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2025-04-04 00:10:49.692862 | orchestrator | 2025-04-04 00:10:49.692957 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-04-04 00:10:49.692990 | orchestrator | ok: [testbed-manager] 2025-04-04 00:10:49.734822 | orchestrator | 2025-04-04 00:10:49.734896 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-04-04 00:10:49.734924 | orchestrator | skipping: [testbed-manager] 2025-04-04 00:10:51.131717 | orchestrator | 2025-04-04 00:10:51.131805 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-04-04 00:10:51.131842 | orchestrator | changed: [testbed-manager] 2025-04-04 00:10:51.702588 | orchestrator | 2025-04-04 00:10:51.702667 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-04-04 00:10:51.702697 | orchestrator | ok: [testbed-manager] 2025-04-04 00:10:52.826517 | orchestrator | 2025-04-04 00:10:52.826581 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-04-04 00:10:52.826604 | orchestrator | changed: [testbed-manager] 2025-04-04 00:11:06.605461 | orchestrator | 2025-04-04 00:11:06.605520 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-04-04 00:11:06.605535 | orchestrator | changed: [testbed-manager] 2025-04-04 00:11:07.293444 | orchestrator | 
2025-04-04 00:11:07.293530 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-04-04 00:11:07.293559 | orchestrator | ok: [testbed-manager] 2025-04-04 00:11:07.340558 | orchestrator | 2025-04-04 00:11:07.340593 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-04-04 00:11:07.340614 | orchestrator | skipping: [testbed-manager] 2025-04-04 00:11:08.301947 | orchestrator | 2025-04-04 00:11:08.302076 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2025-04-04 00:11:08.302101 | orchestrator | changed: [testbed-manager] 2025-04-04 00:11:09.423854 | orchestrator | 2025-04-04 00:11:09.423962 | orchestrator | TASK [Copy SSH private key] **************************************************** 2025-04-04 00:11:09.423994 | orchestrator | changed: [testbed-manager] 2025-04-04 00:11:10.009380 | orchestrator | 2025-04-04 00:11:10.009439 | orchestrator | TASK [Create configuration directory] ****************************************** 2025-04-04 00:11:10.009459 | orchestrator | changed: [testbed-manager] 2025-04-04 00:11:10.049716 | orchestrator | 2025-04-04 00:11:10.049811 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2025-04-04 00:11:10.049845 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-04-04 00:11:12.426511 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-04-04 00:11:12.426579 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-04-04 00:11:12.426591 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2025-04-04 00:11:12.426610 | orchestrator | changed: [testbed-manager] 2025-04-04 00:11:22.432406 | orchestrator | 2025-04-04 00:11:22.432515 | orchestrator | TASK [Install python requirements in venv] ************************************* 2025-04-04 00:11:22.432551 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2025-04-04 00:11:23.543588 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2025-04-04 00:11:23.543818 | orchestrator | ok: [testbed-manager] => (item=packaging) 2025-04-04 00:11:23.543840 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2025-04-04 00:11:23.543858 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2025-04-04 00:11:23.543872 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2025-04-04 00:11:23.543887 | orchestrator | 2025-04-04 00:11:23.543902 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2025-04-04 00:11:23.543949 | orchestrator | changed: [testbed-manager] 2025-04-04 00:11:23.584719 | orchestrator | 2025-04-04 00:11:23.584817 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2025-04-04 00:11:23.584850 | orchestrator | skipping: [testbed-manager] 2025-04-04 00:11:26.347772 | orchestrator | 2025-04-04 00:11:26.347857 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2025-04-04 00:11:26.347889 | orchestrator | changed: [testbed-manager] 2025-04-04 00:11:26.386683 | orchestrator | 2025-04-04 00:11:26.386768 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2025-04-04 00:11:26.386798 | orchestrator | skipping: [testbed-manager] 2025-04-04 00:13:19.865053 | orchestrator | 2025-04-04 00:13:19.865186 | orchestrator | TASK [Run manager part 2] ****************************************************** 2025-04-04 00:13:19.865225 | orchestrator | changed: [testbed-manager] 2025-04-04 
00:13:21.152900 | orchestrator | 2025-04-04 00:13:21.153032 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-04-04 00:13:21.153069 | orchestrator | ok: [testbed-manager] 2025-04-04 00:13:21.250242 | orchestrator | 2025-04-04 00:13:21.250471 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-04 00:13:21.250494 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2025-04-04 00:13:21.250509 | orchestrator | 2025-04-04 00:13:21.506125 | orchestrator | changed 2025-04-04 00:13:21.525694 | 2025-04-04 00:13:21.525819 | TASK [Reboot manager] 2025-04-04 00:13:23.100097 | orchestrator | changed 2025-04-04 00:13:23.119089 | 2025-04-04 00:13:23.119219 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-04-04 00:13:41.715078 | orchestrator | ok 2025-04-04 00:13:41.726710 | 2025-04-04 00:13:41.726841 | TASK [Wait a little longer for the manager so that everything is ready] 2025-04-04 00:14:41.772889 | orchestrator | ok 2025-04-04 00:14:41.785645 | 2025-04-04 00:14:41.785769 | TASK [Deploy manager + bootstrap nodes] 2025-04-04 00:14:46.260626 | orchestrator | 2025-04-04 00:14:46.264696 | orchestrator | # DEPLOY MANAGER 2025-04-04 00:14:46.264737 | orchestrator | 2025-04-04 00:14:46.264755 | orchestrator | + set -e 2025-04-04 00:14:46.264801 | orchestrator | + echo 2025-04-04 00:14:46.264820 | orchestrator | + echo '# DEPLOY MANAGER' 2025-04-04 00:14:46.264837 | orchestrator | + echo 2025-04-04 00:14:46.264861 | orchestrator | + cat /opt/manager-vars.sh 2025-04-04 00:14:46.264895 | orchestrator | export NUMBER_OF_NODES=6 2025-04-04 00:14:46.265156 | orchestrator | 2025-04-04 00:14:46.265178 | orchestrator | export CEPH_VERSION=quincy 2025-04-04 00:14:46.265193 | orchestrator | export CONFIGURATION_VERSION=main 2025-04-04 00:14:46.265208 | orchestrator | export MANAGER_VERSION=8.1.0 
2025-04-04 00:14:46.265222 | orchestrator | export OPENSTACK_VERSION=2024.1 2025-04-04 00:14:46.265236 | orchestrator | 2025-04-04 00:14:46.265251 | orchestrator | export ARA=false 2025-04-04 00:14:46.265265 | orchestrator | export TEMPEST=false 2025-04-04 00:14:46.265279 | orchestrator | export IS_ZUUL=true 2025-04-04 00:14:46.265293 | orchestrator | 2025-04-04 00:14:46.265307 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.77 2025-04-04 00:14:46.265322 | orchestrator | export EXTERNAL_API=false 2025-04-04 00:14:46.265336 | orchestrator | 2025-04-04 00:14:46.265349 | orchestrator | export IMAGE_USER=ubuntu 2025-04-04 00:14:46.265363 | orchestrator | export IMAGE_NODE_USER=ubuntu 2025-04-04 00:14:46.265378 | orchestrator | 2025-04-04 00:14:46.265392 | orchestrator | export CEPH_STACK=ceph-ansible 2025-04-04 00:14:46.265410 | orchestrator | 2025-04-04 00:14:46.266638 | orchestrator | + echo 2025-04-04 00:14:46.266660 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-04-04 00:14:46.266679 | orchestrator | ++ export INTERACTIVE=false 2025-04-04 00:14:46.266785 | orchestrator | ++ INTERACTIVE=false 2025-04-04 00:14:46.266803 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-04-04 00:14:46.266825 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-04-04 00:14:46.266844 | orchestrator | + source /opt/manager-vars.sh 2025-04-04 00:14:46.266859 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-04-04 00:14:46.266873 | orchestrator | ++ NUMBER_OF_NODES=6 2025-04-04 00:14:46.266887 | orchestrator | ++ export CEPH_VERSION=quincy 2025-04-04 00:14:46.266905 | orchestrator | ++ CEPH_VERSION=quincy 2025-04-04 00:14:46.266919 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-04-04 00:14:46.266933 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-04-04 00:14:46.266954 | orchestrator | ++ export MANAGER_VERSION=8.1.0 2025-04-04 00:14:46.267030 | orchestrator | ++ MANAGER_VERSION=8.1.0 2025-04-04 00:14:46.267180 | orchestrator | ++ export 
OPENSTACK_VERSION=2024.1 2025-04-04 00:14:46.267199 | orchestrator | ++ OPENSTACK_VERSION=2024.1 2025-04-04 00:14:46.267213 | orchestrator | ++ export ARA=false 2025-04-04 00:14:46.267228 | orchestrator | ++ ARA=false 2025-04-04 00:14:46.267242 | orchestrator | ++ export TEMPEST=false 2025-04-04 00:14:46.267256 | orchestrator | ++ TEMPEST=false 2025-04-04 00:14:46.267269 | orchestrator | ++ export IS_ZUUL=true 2025-04-04 00:14:46.267284 | orchestrator | ++ IS_ZUUL=true 2025-04-04 00:14:46.267298 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.77 2025-04-04 00:14:46.267312 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.77 2025-04-04 00:14:46.267333 | orchestrator | ++ export EXTERNAL_API=false 2025-04-04 00:14:46.267348 | orchestrator | ++ EXTERNAL_API=false 2025-04-04 00:14:46.267361 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-04-04 00:14:46.267375 | orchestrator | ++ IMAGE_USER=ubuntu 2025-04-04 00:14:46.267393 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-04-04 00:14:46.341648 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-04-04 00:14:46.341701 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-04-04 00:14:46.341717 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-04-04 00:14:46.341731 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2025-04-04 00:14:46.341762 | orchestrator | + docker version 2025-04-04 00:14:46.723742 | orchestrator | Client: Docker Engine - Community 2025-04-04 00:14:46.726162 | orchestrator | Version: 26.1.4 2025-04-04 00:14:46.726198 | orchestrator | API version: 1.45 2025-04-04 00:14:46.726212 | orchestrator | Go version: go1.21.11 2025-04-04 00:14:46.726226 | orchestrator | Git commit: 5650f9b 2025-04-04 00:14:46.726240 | orchestrator | Built: Wed Jun 5 11:28:57 2024 2025-04-04 00:14:46.726255 | orchestrator | OS/Arch: linux/amd64 2025-04-04 00:14:46.726270 | orchestrator | Context: default 2025-04-04 00:14:46.726284 | orchestrator | 2025-04-04 
00:14:46.726298 | orchestrator | Server: Docker Engine - Community 2025-04-04 00:14:46.726313 | orchestrator | Engine: 2025-04-04 00:14:46.726327 | orchestrator | Version: 26.1.4 2025-04-04 00:14:46.726341 | orchestrator | API version: 1.45 (minimum version 1.24) 2025-04-04 00:14:46.726355 | orchestrator | Go version: go1.21.11 2025-04-04 00:14:46.726371 | orchestrator | Git commit: de5c9cf 2025-04-04 00:14:46.726416 | orchestrator | Built: Wed Jun 5 11:28:57 2024 2025-04-04 00:14:46.726431 | orchestrator | OS/Arch: linux/amd64 2025-04-04 00:14:46.726446 | orchestrator | Experimental: false 2025-04-04 00:14:46.726460 | orchestrator | containerd: 2025-04-04 00:14:46.726474 | orchestrator | Version: 1.7.27 2025-04-04 00:14:46.726488 | orchestrator | GitCommit: 05044ec0a9a75232cad458027ca83437aae3f4da 2025-04-04 00:14:46.726502 | orchestrator | runc: 2025-04-04 00:14:46.726516 | orchestrator | Version: 1.2.5 2025-04-04 00:14:46.726530 | orchestrator | GitCommit: v1.2.5-0-g59923ef 2025-04-04 00:14:46.726545 | orchestrator | docker-init: 2025-04-04 00:14:46.726559 | orchestrator | Version: 0.19.0 2025-04-04 00:14:46.726573 | orchestrator | GitCommit: de40ad0 2025-04-04 00:14:46.726592 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2025-04-04 00:14:46.735635 | orchestrator | + set -e 2025-04-04 00:14:46.735749 | orchestrator | + source /opt/manager-vars.sh 2025-04-04 00:14:46.735773 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-04-04 00:14:46.736003 | orchestrator | ++ NUMBER_OF_NODES=6 2025-04-04 00:14:46.736021 | orchestrator | ++ export CEPH_VERSION=quincy 2025-04-04 00:14:46.736036 | orchestrator | ++ CEPH_VERSION=quincy 2025-04-04 00:14:46.736050 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-04-04 00:14:46.736064 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-04-04 00:14:46.736078 | orchestrator | ++ export MANAGER_VERSION=8.1.0 2025-04-04 00:14:46.736092 | orchestrator | ++ MANAGER_VERSION=8.1.0 2025-04-04 
00:14:46.736106 | orchestrator | ++ export OPENSTACK_VERSION=2024.1 2025-04-04 00:14:46.736120 | orchestrator | ++ OPENSTACK_VERSION=2024.1 2025-04-04 00:14:46.736134 | orchestrator | ++ export ARA=false 2025-04-04 00:14:46.736148 | orchestrator | ++ ARA=false 2025-04-04 00:14:46.736162 | orchestrator | ++ export TEMPEST=false 2025-04-04 00:14:46.736176 | orchestrator | ++ TEMPEST=false 2025-04-04 00:14:46.736190 | orchestrator | ++ export IS_ZUUL=true 2025-04-04 00:14:46.736204 | orchestrator | ++ IS_ZUUL=true 2025-04-04 00:14:46.736219 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.77 2025-04-04 00:14:46.736233 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.77 2025-04-04 00:14:46.736247 | orchestrator | ++ export EXTERNAL_API=false 2025-04-04 00:14:46.736273 | orchestrator | ++ EXTERNAL_API=false 2025-04-04 00:14:46.736287 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-04-04 00:14:46.736301 | orchestrator | ++ IMAGE_USER=ubuntu 2025-04-04 00:14:46.736315 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-04-04 00:14:46.736334 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-04-04 00:14:46.736353 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-04-04 00:14:46.736366 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-04-04 00:14:46.736380 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-04-04 00:14:46.736394 | orchestrator | ++ export INTERACTIVE=false 2025-04-04 00:14:46.736408 | orchestrator | ++ INTERACTIVE=false 2025-04-04 00:14:46.736422 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-04-04 00:14:46.736436 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-04-04 00:14:46.736454 | orchestrator | + [[ 8.1.0 != \l\a\t\e\s\t ]] 2025-04-04 00:14:46.744752 | orchestrator | + /opt/configuration/scripts/set-manager-version.sh 8.1.0 2025-04-04 00:14:46.744782 | orchestrator | + set -e 2025-04-04 00:14:46.754114 | orchestrator | + VERSION=8.1.0 2025-04-04 00:14:46.754138 | orchestrator | + sed -i 
's/manager_version: .*/manager_version: 8.1.0/g' /opt/configuration/environments/manager/configuration.yml 2025-04-04 00:14:46.754166 | orchestrator | + [[ 8.1.0 != \l\a\t\e\s\t ]] 2025-04-04 00:14:46.759588 | orchestrator | + sed -i /ceph_version:/d /opt/configuration/environments/manager/configuration.yml 2025-04-04 00:14:46.759620 | orchestrator | + sed -i /openstack_version:/d /opt/configuration/environments/manager/configuration.yml 2025-04-04 00:14:46.764142 | orchestrator | + sh -c /opt/configuration/scripts/sync-configuration-repository.sh 2025-04-04 00:14:46.772612 | orchestrator | /opt/configuration ~ 2025-04-04 00:14:46.775767 | orchestrator | + set -e 2025-04-04 00:14:46.775790 | orchestrator | + pushd /opt/configuration 2025-04-04 00:14:46.775806 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-04-04 00:14:46.775826 | orchestrator | + source /opt/venv/bin/activate 2025-04-04 00:14:46.777225 | orchestrator | ++ deactivate nondestructive 2025-04-04 00:14:46.777401 | orchestrator | ++ '[' -n '' ']' 2025-04-04 00:14:46.777419 | orchestrator | ++ '[' -n '' ']' 2025-04-04 00:14:46.777434 | orchestrator | ++ hash -r 2025-04-04 00:14:46.777448 | orchestrator | ++ '[' -n '' ']' 2025-04-04 00:14:46.777463 | orchestrator | ++ unset VIRTUAL_ENV 2025-04-04 00:14:46.777477 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2025-04-04 00:14:46.777492 | orchestrator | ++ '[' '!' 
nondestructive = nondestructive ']' 2025-04-04 00:14:46.777524 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2025-04-04 00:14:46.777637 | orchestrator | ++ '[' linux-gnu = msys ']' 2025-04-04 00:14:46.777656 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2025-04-04 00:14:46.777670 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2025-04-04 00:14:46.777685 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-04-04 00:14:46.777699 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-04-04 00:14:46.777714 | orchestrator | ++ export PATH 2025-04-04 00:14:46.777727 | orchestrator | ++ '[' -n '' ']' 2025-04-04 00:14:46.777741 | orchestrator | ++ '[' -z '' ']' 2025-04-04 00:14:46.777755 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2025-04-04 00:14:46.777769 | orchestrator | ++ PS1='(venv) ' 2025-04-04 00:14:46.777783 | orchestrator | ++ export PS1 2025-04-04 00:14:46.777797 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2025-04-04 00:14:46.777811 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2025-04-04 00:14:46.777824 | orchestrator | ++ hash -r 2025-04-04 00:14:46.777843 | orchestrator | + pip3 install --no-cache-dir python-gilt==1.2.3 requests Jinja2 PyYAML packaging 2025-04-04 00:14:48.268078 | orchestrator | Requirement already satisfied: python-gilt==1.2.3 in /opt/venv/lib/python3.12/site-packages (1.2.3) 2025-04-04 00:14:48.269662 | orchestrator | Requirement already satisfied: requests in /opt/venv/lib/python3.12/site-packages (2.32.3) 2025-04-04 00:14:48.271534 | orchestrator | Requirement already satisfied: Jinja2 in /opt/venv/lib/python3.12/site-packages (3.1.6) 2025-04-04 00:14:48.273619 | orchestrator | Requirement already satisfied: PyYAML in /opt/venv/lib/python3.12/site-packages (6.0.2) 2025-04-04 00:14:48.275178 | orchestrator | Requirement already satisfied: packaging in 
/opt/venv/lib/python3.12/site-packages (24.2)
2025-04-04 00:14:48.288678 | orchestrator | Requirement already satisfied: click in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (8.1.8)
2025-04-04 00:14:48.290583 | orchestrator | Requirement already satisfied: colorama in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.4.6)
2025-04-04 00:14:48.292090 | orchestrator | Requirement already satisfied: fasteners in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.19)
2025-04-04 00:14:48.293686 | orchestrator | Requirement already satisfied: sh in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (2.2.2)
2025-04-04 00:14:48.340422 | orchestrator | Requirement already satisfied: charset-normalizer<4,>=2 in /opt/venv/lib/python3.12/site-packages (from requests) (3.4.1)
2025-04-04 00:14:48.342581 | orchestrator | Requirement already satisfied: idna<4,>=2.5 in /opt/venv/lib/python3.12/site-packages (from requests) (3.10)
2025-04-04 00:14:48.344276 | orchestrator | Requirement already satisfied: urllib3<3,>=1.21.1 in /opt/venv/lib/python3.12/site-packages (from requests) (2.3.0)
2025-04-04 00:14:48.346093 | orchestrator | Requirement already satisfied: certifi>=2017.4.17 in /opt/venv/lib/python3.12/site-packages (from requests) (2025.1.31)
2025-04-04 00:14:48.351303 | orchestrator | Requirement already satisfied: MarkupSafe>=2.0 in /opt/venv/lib/python3.12/site-packages (from Jinja2) (3.0.2)
2025-04-04 00:14:48.618375 | orchestrator | ++ which gilt
2025-04-04 00:14:48.622656 | orchestrator | + GILT=/opt/venv/bin/gilt
2025-04-04 00:14:48.931365 | orchestrator | + /opt/venv/bin/gilt overlay
2025-04-04 00:14:48.931434 | orchestrator | osism.cfg-generics:
2025-04-04 00:14:50.598537 | orchestrator | - cloning osism.cfg-generics to /home/dragon/.gilt/clone/github.com/osism.cfg-generics
2025-04-04 00:14:50.598678 | orchestrator | - copied (main) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/environments/manager/images.yml to /opt/configuration/environments/manager/
2025-04-04 00:14:50.599198 | orchestrator | - copied (main) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/render-images.py to /opt/configuration/environments/manager/
2025-04-04 00:14:50.599227 | orchestrator | - copied (main) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/set-versions.py to /opt/configuration/environments/
2025-04-04 00:14:51.722147 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh render-images` in /opt/configuration/environments/manager/
2025-04-04 00:14:51.722331 | orchestrator | - running `rm render-images.py` in /opt/configuration/environments/manager/
2025-04-04 00:14:51.740819 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh set-versions` in /opt/configuration/environments/
2025-04-04 00:14:52.157020 | orchestrator | - running `rm set-versions.py` in /opt/configuration/environments/
2025-04-04 00:14:52.229818 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2025-04-04 00:14:52.229850 | orchestrator | + deactivate
2025-04-04 00:14:52.229887 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']'
2025-04-04 00:14:52.229904 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-04-04 00:14:52.229923 | orchestrator | + export PATH
2025-04-04 00:14:52.230167 | orchestrator | + unset _OLD_VIRTUAL_PATH
2025-04-04 00:14:52.230187 | orchestrator | + '[' -n '' ']'
2025-04-04 00:14:52.230201 | orchestrator | + hash -r
2025-04-04 00:14:52.230220 | orchestrator | + '[' -n '' ']'
2025-04-04 00:14:52.230406 | orchestrator | + unset VIRTUAL_ENV
2025-04-04 00:14:52.230424 | orchestrator | + unset VIRTUAL_ENV_PROMPT
2025-04-04 00:14:52.230439 | orchestrator | + '[' '!' '' = nondestructive ']'
2025-04-04 00:14:52.230453 | orchestrator | + unset -f deactivate
2025-04-04 00:14:52.230475 | orchestrator | ~
2025-04-04 00:14:52.233170 | orchestrator | + popd
2025-04-04 00:14:52.233198 | orchestrator | + [[ 8.1.0 == \l\a\t\e\s\t ]]
2025-04-04 00:14:52.234619 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]]
2025-04-04 00:14:52.234645 | orchestrator | ++ semver 8.1.0 7.0.0
2025-04-04 00:14:52.311356 | orchestrator | + [[ 1 -ge 0 ]]
2025-04-04 00:14:52.358303 | orchestrator | + echo 'enable_osism_kubernetes: true'
2025-04-04 00:14:52.358337 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh
2025-04-04 00:14:52.358358 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2025-04-04 00:14:52.358530 | orchestrator | + source /opt/venv/bin/activate
2025-04-04 00:14:52.358552 | orchestrator | ++ deactivate nondestructive
2025-04-04 00:14:52.358578 | orchestrator | ++ '[' -n '' ']'
2025-04-04 00:14:52.358808 | orchestrator | ++ '[' -n '' ']'
2025-04-04 00:14:52.358829 | orchestrator | ++ hash -r
2025-04-04 00:14:52.359009 | orchestrator | ++ '[' -n '' ']'
2025-04-04 00:14:52.359028 | orchestrator | ++ unset VIRTUAL_ENV
2025-04-04 00:14:52.359043 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT
2025-04-04 00:14:52.359061 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']'
2025-04-04 00:14:52.359234 | orchestrator | ++ '[' linux-gnu = cygwin ']'
2025-04-04 00:14:52.359296 | orchestrator | ++ '[' linux-gnu = msys ']'
2025-04-04 00:14:52.359312 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv
2025-04-04 00:14:52.359330 | orchestrator | ++ VIRTUAL_ENV=/opt/venv
2025-04-04 00:14:52.359419 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-04-04 00:14:52.359439 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-04-04 00:14:52.359453 | orchestrator | ++ export PATH
2025-04-04 00:14:52.359470 | orchestrator | ++ '[' -n '' ']'
2025-04-04 00:14:52.359608 | orchestrator | ++ '[' -z '' ']'
2025-04-04 00:14:52.359654 | orchestrator | ++ _OLD_VIRTUAL_PS1=
2025-04-04 00:14:52.359669 | orchestrator | ++ PS1='(venv) '
2025-04-04 00:14:52.359683 | orchestrator | ++ export PS1
2025-04-04 00:14:52.359697 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) '
2025-04-04 00:14:52.359711 | orchestrator | ++ export VIRTUAL_ENV_PROMPT
2025-04-04 00:14:52.359730 | orchestrator | ++ hash -r
2025-04-04 00:14:52.360114 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml
2025-04-04 00:14:53.880612 | orchestrator |
2025-04-04 00:14:54.636460 | orchestrator | PLAY [Copy custom facts] *******************************************************
2025-04-04 00:14:54.636577 | orchestrator |
2025-04-04 00:14:54.636595 | orchestrator | TASK [Create custom facts directory] *******************************************
2025-04-04 00:14:54.636628 | orchestrator | ok: [testbed-manager]
2025-04-04 00:14:55.871836 | orchestrator |
2025-04-04 00:14:55.871998 | orchestrator | TASK [Copy fact files] *********************************************************
2025-04-04 00:14:55.872041 | orchestrator | changed: [testbed-manager]
2025-04-04 00:14:59.710538 | orchestrator |
2025-04-04 00:14:59.710687 | orchestrator | PLAY [Before the deployment of the manager] ************************************
2025-04-04 00:14:59.710709 | orchestrator |
2025-04-04 00:14:59.710725 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-04-04 00:14:59.710758 | orchestrator | ok: [testbed-manager]
2025-04-04 00:15:06.312415 | orchestrator |
2025-04-04 00:15:06.312554 | orchestrator | TASK [Pull images] *************************************************************
2025-04-04 00:15:06.312617 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/ara-server:1.7.2)
2025-04-04 00:16:33.320472 | orchestrator | changed: [testbed-manager] => (item=index.docker.io/library/mariadb:11.6.2)
2025-04-04 00:16:33.320630 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/ceph-ansible:8.1.0)
2025-04-04 00:16:33.320654 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/inventory-reconciler:8.1.0)
2025-04-04 00:16:33.320670 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/kolla-ansible:8.1.0)
2025-04-04 00:16:33.320686 | orchestrator | changed: [testbed-manager] => (item=index.docker.io/library/redis:7.4.1-alpine)
2025-04-04 00:16:33.320701 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/netbox:v4.1.7)
2025-04-04 00:16:33.320715 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/osism-ansible:8.1.0)
2025-04-04 00:16:33.320730 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/osism:0.20241219.2)
2025-04-04 00:16:33.320752 | orchestrator | changed: [testbed-manager] => (item=index.docker.io/library/postgres:16.6-alpine)
2025-04-04 00:16:33.320768 | orchestrator | changed: [testbed-manager] => (item=index.docker.io/library/traefik:v3.2.1)
2025-04-04 00:16:33.320783 | orchestrator | changed: [testbed-manager] => (item=index.docker.io/hashicorp/vault:1.18.2)
2025-04-04 00:16:33.320797 | orchestrator |
2025-04-04 00:16:33.320812 | orchestrator | TASK [Check status] ************************************************************
2025-04-04 00:16:33.320846 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (120 retries left).
2025-04-04 00:16:33.358413 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (119 retries left).
2025-04-04 00:16:33.358519 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (118 retries left).
2025-04-04 00:16:33.358549 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (117 retries left).
2025-04-04 00:16:33.358574 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (116 retries left).
2025-04-04 00:16:33.358602 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j953357563060.1583', 'results_file': '/home/dragon/.ansible_async/j953357563060.1583', 'changed': True, 'item': 'registry.osism.tech/osism/ara-server:1.7.2', 'ansible_loop_var': 'item'})
2025-04-04 00:16:33.358646 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j930754021132.1608', 'results_file': '/home/dragon/.ansible_async/j930754021132.1608', 'changed': True, 'item': 'index.docker.io/library/mariadb:11.6.2', 'ansible_loop_var': 'item'})
2025-04-04 00:16:33.358671 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (120 retries left).
2025-04-04 00:16:33.358695 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (119 retries left).
2025-04-04 00:16:33.358721 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j707789517440.1633', 'results_file': '/home/dragon/.ansible_async/j707789517440.1633', 'changed': True, 'item': 'registry.osism.tech/osism/ceph-ansible:8.1.0', 'ansible_loop_var': 'item'})
2025-04-04 00:16:33.358754 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j134093305005.1668', 'results_file': '/home/dragon/.ansible_async/j134093305005.1668', 'changed': True, 'item': 'registry.osism.tech/osism/inventory-reconciler:8.1.0', 'ansible_loop_var': 'item'})
2025-04-04 00:16:33.358785 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j657892577923.1700', 'results_file': '/home/dragon/.ansible_async/j657892577923.1700', 'changed': True, 'item': 'registry.osism.tech/osism/kolla-ansible:8.1.0', 'ansible_loop_var': 'item'})
2025-04-04 00:16:33.358810 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j419914548302.1732', 'results_file': '/home/dragon/.ansible_async/j419914548302.1732', 'changed': True, 'item': 'index.docker.io/library/redis:7.4.1-alpine', 'ansible_loop_var': 'item'})
2025-04-04 00:16:33.358833 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (120 retries left).
2025-04-04 00:16:33.358887 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j808512980999.1768', 'results_file': '/home/dragon/.ansible_async/j808512980999.1768', 'changed': True, 'item': 'registry.osism.tech/osism/netbox:v4.1.7', 'ansible_loop_var': 'item'})
2025-04-04 00:16:33.358912 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j440824513750.1800', 'results_file': '/home/dragon/.ansible_async/j440824513750.1800', 'changed': True, 'item': 'registry.osism.tech/osism/osism-ansible:8.1.0', 'ansible_loop_var': 'item'})
2025-04-04 00:16:33.358936 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j445857416936.1839', 'results_file': '/home/dragon/.ansible_async/j445857416936.1839', 'changed': True, 'item': 'registry.osism.tech/osism/osism:0.20241219.2', 'ansible_loop_var': 'item'})
2025-04-04 00:16:33.358995 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j503395487446.1866', 'results_file': '/home/dragon/.ansible_async/j503395487446.1866', 'changed': True, 'item': 'index.docker.io/library/postgres:16.6-alpine', 'ansible_loop_var': 'item'})
2025-04-04 00:16:33.359020 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j386656769953.1898', 'results_file': '/home/dragon/.ansible_async/j386656769953.1898', 'changed': True, 'item': 'index.docker.io/library/traefik:v3.2.1', 'ansible_loop_var': 'item'})
2025-04-04 00:16:33.359045 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j873505037785.1932', 'results_file': '/home/dragon/.ansible_async/j873505037785.1932', 'changed': True, 'item': 'index.docker.io/hashicorp/vault:1.18.2', 'ansible_loop_var': 'item'})
2025-04-04 00:16:33.359070 | orchestrator |
2025-04-04 00:16:33.359096 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************
2025-04-04 00:16:33.359136 | orchestrator | ok: [testbed-manager]
2025-04-04 00:16:33.870061 | orchestrator |
2025-04-04 00:16:33.870177 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] ****************************
2025-04-04 00:16:33.870213 | orchestrator | changed: [testbed-manager]
2025-04-04 00:16:34.241498 | orchestrator |
2025-04-04 00:16:34.241601 | orchestrator | TASK [Add netbox_postgres_volume_type parameter] *******************************
2025-04-04 00:16:34.241635 | orchestrator | changed: [testbed-manager]
2025-04-04 00:16:34.614232 | orchestrator |
2025-04-04 00:16:34.614355 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************
2025-04-04 00:16:34.614392 | orchestrator | changed: [testbed-manager]
2025-04-04 00:16:34.678876 | orchestrator |
2025-04-04 00:16:34.678996 | orchestrator | TASK [Use insecure glance configuration] ***************************************
2025-04-04 00:16:34.679029 | orchestrator | skipping: [testbed-manager]
2025-04-04 00:16:35.049335 | orchestrator |
2025-04-04 00:16:35.049449 | orchestrator | TASK [Check if /etc/OTC_region exist] ******************************************
2025-04-04 00:16:35.049483 | orchestrator | ok: [testbed-manager]
2025-04-04 00:16:35.258517 | orchestrator |
2025-04-04 00:16:35.258624 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************
2025-04-04 00:16:35.258656 | orchestrator | skipping: [testbed-manager]
2025-04-04 00:16:37.217463 | orchestrator |
2025-04-04 00:16:37.217579 | orchestrator | PLAY [Apply role traefik & netbox] *********************************************
2025-04-04 00:16:37.217596 | orchestrator |
2025-04-04 00:16:37.217609 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-04-04 00:16:37.217641 | orchestrator | ok: [testbed-manager]
2025-04-04 00:16:37.452033 | orchestrator |
2025-04-04 00:16:37.452110 | orchestrator | TASK [Apply traefik role] ******************************************************
2025-04-04 00:16:37.452138 | orchestrator |
2025-04-04 00:16:37.576010 | orchestrator | TASK [osism.services.traefik : Include config tasks] ***************************
2025-04-04 00:16:37.576084 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager
2025-04-04 00:16:38.839573 | orchestrator |
2025-04-04 00:16:38.839694 | orchestrator | TASK [osism.services.traefik : Create required directories] ********************
2025-04-04 00:16:38.839727 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik)
2025-04-04 00:16:40.887360 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates)
2025-04-04 00:16:40.887474 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration)
2025-04-04 00:16:40.887489 | orchestrator |
2025-04-04 00:16:40.887502 | orchestrator | TASK [osism.services.traefik : Copy configuration files] ***********************
2025-04-04 00:16:40.887531 | orchestrator | changed: [testbed-manager] => (item=traefik.yml)
2025-04-04 00:16:41.630898 | orchestrator | changed: [testbed-manager] => (item=traefik.env)
2025-04-04 00:16:41.631065 | orchestrator | changed: [testbed-manager] => (item=certificates.yml)
2025-04-04 00:16:41.631085 | orchestrator |
2025-04-04 00:16:41.631101 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ********************
2025-04-04 00:16:41.631134 | orchestrator | changed: [testbed-manager] => (item=None)
2025-04-04 00:16:42.375749 | orchestrator | changed: [testbed-manager]
2025-04-04 00:16:42.375857 | orchestrator |
2025-04-04 00:16:42.375876 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] *********************
2025-04-04 00:16:42.375908 | orchestrator | changed: [testbed-manager] => (item=None)
2025-04-04 00:16:42.471613 | orchestrator | changed: [testbed-manager]
2025-04-04 00:16:42.471676 | orchestrator |
2025-04-04 00:16:42.471692 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] *********************
2025-04-04 00:16:42.471718 | orchestrator | skipping: [testbed-manager]
2025-04-04 00:16:42.905440 | orchestrator |
2025-04-04 00:16:42.905543 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] *******************
2025-04-04 00:16:42.905575 | orchestrator | ok: [testbed-manager]
2025-04-04 00:16:43.031634 | orchestrator |
2025-04-04 00:16:43.031732 | orchestrator | TASK [osism.services.traefik : Include service tasks] **************************
2025-04-04 00:16:43.031764 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager
2025-04-04 00:16:44.158095 | orchestrator |
2025-04-04 00:16:44.158242 | orchestrator | TASK [osism.services.traefik : Create traefik external network] ****************
2025-04-04 00:16:44.158282 | orchestrator | changed: [testbed-manager]
2025-04-04 00:16:45.105178 | orchestrator |
2025-04-04 00:16:45.105296 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] *******************
2025-04-04 00:16:45.105331 | orchestrator | changed: [testbed-manager]
2025-04-04 00:16:48.545440 | orchestrator |
2025-04-04 00:16:48.545562 | orchestrator | TASK [osism.services.traefik : Manage traefik service] *************************
2025-04-04 00:16:48.545600 | orchestrator | changed: [testbed-manager]
2025-04-04 00:16:48.940229 | orchestrator |
2025-04-04 00:16:48.940276 | orchestrator | TASK [Apply netbox role] *******************************************************
2025-04-04 00:16:48.940306 | orchestrator |
2025-04-04 00:16:49.069674 | orchestrator | TASK [osism.services.netbox : Include install tasks] ***************************
2025-04-04 00:16:49.069716 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/install-Debian-family.yml for testbed-manager
2025-04-04 00:16:52.441111 | orchestrator |
2025-04-04 00:16:52.441246 | orchestrator | TASK [osism.services.netbox : Install required packages] ***********************
2025-04-04 00:16:52.441281 | orchestrator | ok: [testbed-manager]
2025-04-04 00:16:52.602783 | orchestrator |
2025-04-04 00:16:52.602815 | orchestrator | TASK [osism.services.netbox : Include config tasks] ****************************
2025-04-04 00:16:52.602837 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/config.yml for testbed-manager
2025-04-04 00:16:54.102122 | orchestrator |
2025-04-04 00:16:54.102232 | orchestrator | TASK [osism.services.netbox : Create required directories] *********************
2025-04-04 00:16:54.102263 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox)
2025-04-04 00:16:54.267522 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox/configuration)
2025-04-04 00:16:54.267547 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox/secrets)
2025-04-04 00:16:54.267560 | orchestrator |
2025-04-04 00:16:54.267573 | orchestrator | TASK [osism.services.netbox : Include postgres config tasks] *******************
2025-04-04 00:16:54.267592 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/config-postgres.yml for testbed-manager
2025-04-04 00:16:55.092642 | orchestrator |
2025-04-04 00:16:55.092716 | orchestrator | TASK [osism.services.netbox : Copy postgres environment files] *****************
2025-04-04 00:16:55.092742 | orchestrator | changed: [testbed-manager] => (item=postgres)
2025-04-04 00:16:55.880896 | orchestrator |
2025-04-04 00:16:55.881001 | orchestrator | TASK [osism.services.netbox : Copy postgres configuration file] ****************
2025-04-04 00:16:55.881040 | orchestrator | changed: [testbed-manager]
2025-04-04 00:16:56.666627 | orchestrator |
2025-04-04 00:16:56.666725 | orchestrator | TASK [osism.services.netbox : Copy secret files] *******************************
2025-04-04 00:16:56.666757 | orchestrator | changed: [testbed-manager] => (item=None)
2025-04-04 00:16:57.148338 | orchestrator | changed: [testbed-manager]
2025-04-04 00:16:57.148427 | orchestrator |
2025-04-04 00:16:57.148444 | orchestrator | TASK [osism.services.netbox : Create docker-entrypoint-initdb.d directory] *****
2025-04-04 00:16:57.148475 | orchestrator | changed: [testbed-manager]
2025-04-04 00:16:57.580732 | orchestrator |
2025-04-04 00:16:57.580811 | orchestrator | TASK [osism.services.netbox : Check if init.sql file exists] *******************
2025-04-04 00:16:57.580839 | orchestrator | ok: [testbed-manager]
2025-04-04 00:16:57.646089 | orchestrator |
2025-04-04 00:16:57.646120 | orchestrator | TASK [osism.services.netbox : Copy init.sql file] ******************************
2025-04-04 00:16:57.646141 | orchestrator | skipping: [testbed-manager]
2025-04-04 00:16:58.401840 | orchestrator |
2025-04-04 00:16:58.401892 | orchestrator | TASK [osism.services.netbox : Create init-netbox-database.sh script] ***********
2025-04-04 00:16:58.401917 | orchestrator | changed: [testbed-manager]
2025-04-04 00:16:58.523894 | orchestrator |
2025-04-04 00:16:58.523924 | orchestrator | TASK [osism.services.netbox : Include config tasks] ****************************
2025-04-04 00:16:58.523982 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/config-netbox.yml for testbed-manager
2025-04-04 00:16:59.579993 | orchestrator |
2025-04-04 00:16:59.580119 | orchestrator | TASK [osism.services.netbox : Create directories required by netbox] ***********
2025-04-04 00:16:59.580153 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox/configuration/initializers)
2025-04-04 00:17:00.411782 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox/configuration/startup-scripts)
2025-04-04 00:17:00.411912 | orchestrator |
2025-04-04 00:17:00.411931 | orchestrator | TASK [osism.services.netbox : Copy netbox environment files] *******************
2025-04-04 00:17:00.412018 | orchestrator | changed: [testbed-manager] => (item=netbox)
2025-04-04 00:17:01.188561 | orchestrator |
2025-04-04 00:17:01.188688 | orchestrator | TASK [osism.services.netbox : Copy netbox configuration file] ******************
2025-04-04 00:17:01.188738 | orchestrator | changed: [testbed-manager]
2025-04-04 00:17:01.257814 | orchestrator |
2025-04-04 00:17:01.257936 | orchestrator | TASK [osism.services.netbox : Copy nginx unit configuration file (<= 1.26)] ****
2025-04-04 00:17:01.258004 | orchestrator | skipping: [testbed-manager]
2025-04-04 00:17:02.002661 | orchestrator |
2025-04-04 00:17:02.002822 | orchestrator | TASK [osism.services.netbox : Copy nginx unit configuration file (> 1.26)] *****
2025-04-04 00:17:02.002864 | orchestrator | changed: [testbed-manager]
2025-04-04 00:17:04.217023 | orchestrator |
2025-04-04 00:17:04.217151 | orchestrator | TASK [osism.services.netbox : Copy secret files] *******************************
2025-04-04 00:17:04.217185 | orchestrator | changed: [testbed-manager] => (item=None)
2025-04-04 00:17:11.380047 | orchestrator | changed: [testbed-manager] => (item=None)
2025-04-04 00:17:11.380201 | orchestrator | changed: [testbed-manager] => (item=None)
2025-04-04 00:17:11.380221 | orchestrator | changed: [testbed-manager]
2025-04-04 00:17:11.380240 | orchestrator |
2025-04-04 00:17:11.380255 | orchestrator | TASK [osism.services.netbox : Deploy initializers for netbox] ******************
2025-04-04 00:17:11.380289 | orchestrator | changed: [testbed-manager] => (item=custom_fields)
2025-04-04 00:17:12.146518 | orchestrator | changed: [testbed-manager] => (item=device_roles)
2025-04-04 00:17:12.146622 | orchestrator | changed: [testbed-manager] => (item=device_types)
2025-04-04 00:17:12.146638 | orchestrator | changed: [testbed-manager] => (item=groups)
2025-04-04 00:17:12.146653 | orchestrator | changed: [testbed-manager] => (item=manufacturers)
2025-04-04 00:17:12.146669 | orchestrator | changed: [testbed-manager] => (item=object_permissions)
2025-04-04 00:17:12.146684 | orchestrator | changed: [testbed-manager] => (item=prefix_vlan_roles)
2025-04-04 00:17:12.146727 | orchestrator | changed: [testbed-manager] => (item=sites)
2025-04-04 00:17:12.146742 | orchestrator | changed: [testbed-manager] => (item=tags)
2025-04-04 00:17:12.146757 | orchestrator | changed: [testbed-manager] => (item=users)
2025-04-04 00:17:12.146772 | orchestrator |
2025-04-04 00:17:12.146787 | orchestrator | TASK [osism.services.netbox : Deploy startup scripts for netbox] ***************
2025-04-04 00:17:12.146821 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/files/startup-scripts/270_tags.py)
2025-04-04 00:17:12.382452 | orchestrator |
2025-04-04 00:17:12.382494 | orchestrator | TASK [osism.services.netbox : Include service tasks] ***************************
2025-04-04 00:17:12.382519 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/service.yml for testbed-manager
2025-04-04 00:17:13.189793 | orchestrator |
2025-04-04 00:17:13.189902 | orchestrator | TASK [osism.services.netbox : Copy netbox systemd unit file] *******************
2025-04-04 00:17:13.189933 | orchestrator | changed: [testbed-manager]
2025-04-04 00:17:13.903747 | orchestrator |
2025-04-04 00:17:13.903851 | orchestrator | TASK [osism.services.netbox : Create traefik external network] *****************
2025-04-04 00:17:13.903880 | orchestrator | ok: [testbed-manager]
2025-04-04 00:17:14.715694 | orchestrator |
2025-04-04 00:17:14.715801 | orchestrator | TASK [osism.services.netbox : Copy docker-compose.yml file] ********************
2025-04-04 00:17:14.715836 | orchestrator | changed: [testbed-manager]
2025-04-04 00:17:20.603420 | orchestrator |
2025-04-04 00:17:20.603554 | orchestrator | TASK [osism.services.netbox : Pull container images] ***************************
2025-04-04 00:17:20.603592 | orchestrator | changed: [testbed-manager]
2025-04-04 00:17:21.668215 | orchestrator |
2025-04-04 00:17:21.668969 | orchestrator | TASK [osism.services.netbox : Stop and disable old service docker-compose@netbox] ***
2025-04-04 00:17:21.669055 | orchestrator | ok: [testbed-manager]
2025-04-04 00:17:44.085032 | orchestrator |
2025-04-04 00:17:44.085179 | orchestrator | TASK [osism.services.netbox : Manage netbox service] ***************************
2025-04-04 00:17:44.085217 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage netbox service (10 retries left).
2025-04-04 00:17:44.193517 | orchestrator | ok: [testbed-manager]
2025-04-04 00:17:44.193589 | orchestrator |
2025-04-04 00:17:44.193606 | orchestrator | TASK [osism.services.netbox : Register that netbox service was started] ********
2025-04-04 00:17:44.193635 | orchestrator | skipping: [testbed-manager]
2025-04-04 00:17:44.258384 | orchestrator |
2025-04-04 00:17:44.258994 | orchestrator | TASK [osism.services.netbox : Flush handlers] **********************************
2025-04-04 00:17:44.259019 | orchestrator |
2025-04-04 00:17:44.259038 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] *************
2025-04-04 00:17:44.259062 | orchestrator | skipping: [testbed-manager]
2025-04-04 00:17:44.365787 | orchestrator |
2025-04-04 00:17:44.365836 | orchestrator | RUNNING HANDLER [osism.services.netbox : Restart netbox service] ***************
2025-04-04 00:17:44.365861 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/restart-service.yml for testbed-manager
2025-04-04 00:17:45.256822 | orchestrator |
2025-04-04 00:17:45.256931 | orchestrator | RUNNING HANDLER [osism.services.netbox : Get infos on postgres container] ******
2025-04-04 00:17:45.257018 | orchestrator | ok: [testbed-manager]
2025-04-04 00:17:45.338418 | orchestrator |
2025-04-04 00:17:45.338464 | orchestrator | RUNNING HANDLER [osism.services.netbox : Set postgres container version fact] ***
2025-04-04 00:17:45.338489 | orchestrator | ok: [testbed-manager]
2025-04-04 00:17:45.406608 | orchestrator |
2025-04-04 00:17:45.406638 | orchestrator | RUNNING HANDLER [osism.services.netbox : Print major version of postgres container] ***
2025-04-04 00:17:45.406661 | orchestrator | ok: [testbed-manager] => {
2025-04-04 00:17:46.154931 | orchestrator | "msg": "The major version of the running postgres container is 16"
2025-04-04 00:17:46.155088 | orchestrator | }
2025-04-04 00:17:46.155106 | orchestrator |
2025-04-04 00:17:46.155123 | orchestrator | RUNNING HANDLER [osism.services.netbox : Pull postgres image] ******************
2025-04-04 00:17:46.155154 | orchestrator | ok: [testbed-manager]
2025-04-04 00:17:47.162873 | orchestrator |
2025-04-04 00:17:47.163032 | orchestrator | RUNNING HANDLER [osism.services.netbox : Get infos on postgres image] **********
2025-04-04 00:17:47.163067 | orchestrator | ok: [testbed-manager]
2025-04-04 00:17:47.264219 | orchestrator |
2025-04-04 00:17:47.264286 | orchestrator | RUNNING HANDLER [osism.services.netbox : Set postgres image version fact] ******
2025-04-04 00:17:47.264314 | orchestrator | ok: [testbed-manager]
2025-04-04 00:17:47.327897 | orchestrator |
2025-04-04 00:17:47.327988 | orchestrator | RUNNING HANDLER [osism.services.netbox : Print major version of postgres image] ***
2025-04-04 00:17:47.328028 | orchestrator | ok: [testbed-manager] => {
2025-04-04 00:17:47.395216 | orchestrator | "msg": "The major version of the postgres image is 16"
2025-04-04 00:17:47.395259 | orchestrator | }
2025-04-04 00:17:47.395274 | orchestrator |
2025-04-04 00:17:47.395289 | orchestrator | RUNNING HANDLER [osism.services.netbox : Stop netbox service] ******************
2025-04-04 00:17:47.395312 | orchestrator | skipping: [testbed-manager]
2025-04-04 00:17:47.474705 | orchestrator |
2025-04-04 00:17:47.474743 | orchestrator | RUNNING HANDLER [osism.services.netbox : Wait for netbox service to stop] ******
2025-04-04 00:17:47.474766 | orchestrator | skipping: [testbed-manager]
2025-04-04 00:17:47.557801 | orchestrator |
2025-04-04 00:17:47.557842 | orchestrator | RUNNING HANDLER [osism.services.netbox : Get infos on postgres volume] *********
2025-04-04 00:17:47.557865 | orchestrator | skipping: [testbed-manager]
2025-04-04 00:17:47.636977 | orchestrator |
2025-04-04 00:17:47.637020 | orchestrator | RUNNING HANDLER [osism.services.netbox : Upgrade postgres database] ************
2025-04-04 00:17:47.637043 | orchestrator | skipping: [testbed-manager]
2025-04-04 00:17:47.723635 | orchestrator |
2025-04-04 00:17:47.723685 | orchestrator | RUNNING HANDLER [osism.services.netbox : Remove netbox-pgautoupgrade container] ***
2025-04-04 00:17:47.723708 | orchestrator | skipping: [testbed-manager]
2025-04-04 00:17:47.792376 | orchestrator |
2025-04-04 00:17:47.792421 | orchestrator | RUNNING HANDLER [osism.services.netbox : Start netbox service] *****************
2025-04-04 00:17:47.792448 | orchestrator | skipping: [testbed-manager]
2025-04-04 00:17:49.294392 | orchestrator |
2025-04-04 00:17:49.295110 | orchestrator | RUNNING HANDLER [osism.services.netbox : Restart netbox service] ***************
2025-04-04 00:17:49.295161 | orchestrator | changed: [testbed-manager]
2025-04-04 00:17:49.452351 | orchestrator |
2025-04-04 00:17:49.452405 | orchestrator | RUNNING HANDLER [osism.services.netbox : Register that netbox service was started] ***
2025-04-04 00:17:49.452431 | orchestrator | ok: [testbed-manager]
2025-04-04 00:18:49.529159 | orchestrator |
2025-04-04 00:18:49.529314 | orchestrator | RUNNING HANDLER [osism.services.netbox : Wait for netbox service to start] *****
2025-04-04 00:18:49.529354 | orchestrator | Pausing for 60 seconds
2025-04-04 00:18:49.643848 | orchestrator | changed: [testbed-manager]
2025-04-04 00:18:49.643883 | orchestrator |
2025-04-04 00:18:49.643898 | orchestrator | RUNNING HANDLER [osism.services.netbox : Wait for an healthy netbox service] ***
2025-04-04 00:18:49.643958 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/wait-for-healthy-service.yml for testbed-manager
2025-04-04 00:24:05.822130 | orchestrator |
2025-04-04 00:24:05.822270 | orchestrator | RUNNING HANDLER [osism.services.netbox : Check that all containers are in a good state] ***
2025-04-04 00:24:05.822307 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (60 retries left).
2025-04-04 00:24:08.044243 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (59 retries left).
2025-04-04 00:24:08.044372 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (58 retries left).
2025-04-04 00:24:08.044390 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (57 retries left).
2025-04-04 00:24:08.044406 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (56 retries left).
2025-04-04 00:24:08.044421 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (55 retries left).
2025-04-04 00:24:08.044435 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (54 retries left).
2025-04-04 00:24:08.044450 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (53 retries left). 2025-04-04 00:24:08.044473 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (52 retries left). 2025-04-04 00:24:08.044497 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (51 retries left). 2025-04-04 00:24:08.044605 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (50 retries left). 2025-04-04 00:24:08.044624 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (49 retries left). 2025-04-04 00:24:08.044638 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (48 retries left). 2025-04-04 00:24:08.044653 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (47 retries left). 2025-04-04 00:24:08.044667 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (46 retries left). 2025-04-04 00:24:08.044681 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (45 retries left). 2025-04-04 00:24:08.044695 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (44 retries left). 2025-04-04 00:24:08.044709 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (43 retries left). 2025-04-04 00:24:08.044724 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (42 retries left). 2025-04-04 00:24:08.044751 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (41 retries left). 
2025-04-04 00:24:08.044766 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (40 retries left). 2025-04-04 00:24:08.044784 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (39 retries left). 2025-04-04 00:24:08.044800 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (38 retries left). 2025-04-04 00:24:08.044816 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (37 retries left). 2025-04-04 00:24:08.044833 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (36 retries left). 2025-04-04 00:24:08.044886 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (35 retries left). 2025-04-04 00:24:08.044904 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (34 retries left). 2025-04-04 00:24:08.044918 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (33 retries left). 2025-04-04 00:24:08.044932 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (32 retries left). 2025-04-04 00:24:08.044946 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (31 retries left). 
2025-04-04 00:24:08.044960 | orchestrator | changed: [testbed-manager] 2025-04-04 00:24:08.044976 | orchestrator | 2025-04-04 00:24:08.044992 | orchestrator | PLAY [Deploy manager service] ************************************************** 2025-04-04 00:24:08.045007 | orchestrator | 2025-04-04 00:24:08.045021 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-04-04 00:24:08.045055 | orchestrator | ok: [testbed-manager] 2025-04-04 00:24:08.186829 | orchestrator | 2025-04-04 00:24:08.186962 | orchestrator | TASK [Apply manager role] ****************************************************** 2025-04-04 00:24:08.186997 | orchestrator | 2025-04-04 00:24:08.267061 | orchestrator | TASK [osism.services.manager : Include install tasks] ************************** 2025-04-04 00:24:08.267132 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager 2025-04-04 00:24:10.234384 | orchestrator | 2025-04-04 00:24:10.234497 | orchestrator | TASK [osism.services.manager : Install required packages] ********************** 2025-04-04 00:24:10.234529 | orchestrator | ok: [testbed-manager] 2025-04-04 00:24:10.295045 | orchestrator | 2025-04-04 00:24:10.295074 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] ***** 2025-04-04 00:24:10.295094 | orchestrator | ok: [testbed-manager] 2025-04-04 00:24:10.401065 | orchestrator | 2025-04-04 00:24:10.401102 | orchestrator | TASK [osism.services.manager : Include config tasks] *************************** 2025-04-04 00:24:10.401125 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager 2025-04-04 00:24:13.480373 | orchestrator | 2025-04-04 00:24:13.480501 | orchestrator | TASK [osism.services.manager : Create required directories] ******************** 2025-04-04 
00:24:13.480539 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible) 2025-04-04 00:24:14.207114 | orchestrator | changed: [testbed-manager] => (item=/opt/archive) 2025-04-04 00:24:14.207220 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration) 2025-04-04 00:24:14.207237 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data) 2025-04-04 00:24:14.207252 | orchestrator | ok: [testbed-manager] => (item=/opt/manager) 2025-04-04 00:24:14.207267 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets) 2025-04-04 00:24:14.207282 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets) 2025-04-04 00:24:14.207297 | orchestrator | changed: [testbed-manager] => (item=/opt/state) 2025-04-04 00:24:14.207311 | orchestrator | 2025-04-04 00:24:14.207326 | orchestrator | TASK [osism.services.manager : Copy client environment file] ******************* 2025-04-04 00:24:14.207357 | orchestrator | changed: [testbed-manager] 2025-04-04 00:24:14.289106 | orchestrator | 2025-04-04 00:24:14.289135 | orchestrator | TASK [osism.services.manager : Include ara config tasks] *********************** 2025-04-04 00:24:14.289157 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager 2025-04-04 00:24:15.652622 | orchestrator | 2025-04-04 00:24:15.652741 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] ********************* 2025-04-04 00:24:15.652776 | orchestrator | changed: [testbed-manager] => (item=ara) 2025-04-04 00:24:16.403221 | orchestrator | changed: [testbed-manager] => (item=ara-server) 2025-04-04 00:24:16.403354 | orchestrator | 2025-04-04 00:24:16.403374 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ****************** 2025-04-04 00:24:16.403404 | orchestrator | changed: [testbed-manager] 2025-04-04 00:24:16.468271 | orchestrator | 2025-04-04 
00:24:16.468310 | orchestrator | TASK [osism.services.manager : Include vault config tasks] ********************* 2025-04-04 00:24:16.468332 | orchestrator | skipping: [testbed-manager] 2025-04-04 00:24:16.543751 | orchestrator | 2025-04-04 00:24:16.543779 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] ******************* 2025-04-04 00:24:16.543800 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager 2025-04-04 00:24:18.053737 | orchestrator | 2025-04-04 00:24:18.053906 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] ************************** 2025-04-04 00:24:18.053944 | orchestrator | changed: [testbed-manager] => (item=None) 2025-04-04 00:24:18.754796 | orchestrator | changed: [testbed-manager] => (item=None) 2025-04-04 00:24:18.754948 | orchestrator | changed: [testbed-manager] 2025-04-04 00:24:18.754968 | orchestrator | 2025-04-04 00:24:18.754984 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ****************** 2025-04-04 00:24:18.755016 | orchestrator | changed: [testbed-manager] 2025-04-04 00:24:18.855331 | orchestrator | 2025-04-04 00:24:18.855389 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ******************** 2025-04-04 00:24:18.855417 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-netbox.yml for testbed-manager 2025-04-04 00:24:19.602682 | orchestrator | 2025-04-04 00:24:19.602784 | orchestrator | TASK [osism.services.manager : Copy secret files] ****************************** 2025-04-04 00:24:19.602815 | orchestrator | changed: [testbed-manager] => (item=None) 2025-04-04 00:24:20.274189 | orchestrator | changed: [testbed-manager] 2025-04-04 00:24:20.274259 | orchestrator | 2025-04-04 00:24:20.274276 | orchestrator | TASK [osism.services.manager : Copy netbox 
environment file] ******************* 2025-04-04 00:24:20.274302 | orchestrator | changed: [testbed-manager] 2025-04-04 00:24:20.389437 | orchestrator | 2025-04-04 00:24:20.389473 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ******************** 2025-04-04 00:24:20.389496 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager 2025-04-04 00:24:20.929746 | orchestrator | 2025-04-04 00:24:20.929798 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] **************** 2025-04-04 00:24:20.929905 | orchestrator | changed: [testbed-manager] 2025-04-04 00:24:21.362631 | orchestrator | 2025-04-04 00:24:21.362746 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] ************** 2025-04-04 00:24:21.362778 | orchestrator | changed: [testbed-manager] 2025-04-04 00:24:22.714339 | orchestrator | 2025-04-04 00:24:22.714466 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ****************** 2025-04-04 00:24:22.714502 | orchestrator | changed: [testbed-manager] => (item=conductor) 2025-04-04 00:24:23.422678 | orchestrator | changed: [testbed-manager] => (item=openstack) 2025-04-04 00:24:23.422778 | orchestrator | 2025-04-04 00:24:23.422796 | orchestrator | TASK [osism.services.manager : Copy listener environment file] ***************** 2025-04-04 00:24:23.422828 | orchestrator | changed: [testbed-manager] 2025-04-04 00:24:23.790388 | orchestrator | 2025-04-04 00:24:23.790504 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************ 2025-04-04 00:24:23.790538 | orchestrator | ok: [testbed-manager] 2025-04-04 00:24:23.844239 | orchestrator | 2025-04-04 00:24:23.844361 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] ************** 2025-04-04 00:24:23.844398 | orchestrator | skipping: 
[testbed-manager] 2025-04-04 00:24:24.527588 | orchestrator | 2025-04-04 00:24:24.527713 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ******** 2025-04-04 00:24:24.527752 | orchestrator | changed: [testbed-manager] 2025-04-04 00:24:24.667193 | orchestrator | 2025-04-04 00:24:24.667293 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] ******************* 2025-04-04 00:24:24.667324 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager 2025-04-04 00:24:24.715640 | orchestrator | 2025-04-04 00:24:24.715667 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] ********************** 2025-04-04 00:24:24.715687 | orchestrator | ok: [testbed-manager] 2025-04-04 00:24:26.995286 | orchestrator | 2025-04-04 00:24:26.995413 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] *************************** 2025-04-04 00:24:26.995449 | orchestrator | changed: [testbed-manager] => (item=osism) 2025-04-04 00:24:27.797681 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker) 2025-04-04 00:24:27.797818 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager) 2025-04-04 00:24:27.798622 | orchestrator | 2025-04-04 00:24:27.798647 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] ********************* 2025-04-04 00:24:27.798681 | orchestrator | changed: [testbed-manager] 2025-04-04 00:24:28.580288 | orchestrator | 2025-04-04 00:24:28.580394 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] *********************** 2025-04-04 00:24:28.580427 | orchestrator | changed: [testbed-manager] 2025-04-04 00:24:28.678170 | orchestrator | 2025-04-04 00:24:28.678206 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] ******************* 2025-04-04 00:24:28.678229 | orchestrator | 
included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager 2025-04-04 00:24:28.745143 | orchestrator | 2025-04-04 00:24:28.745176 | orchestrator | TASK [osism.services.manager : Include scripts vars file] ********************** 2025-04-04 00:24:28.745197 | orchestrator | ok: [testbed-manager] 2025-04-04 00:24:29.538072 | orchestrator | 2025-04-04 00:24:29.538186 | orchestrator | TASK [osism.services.manager : Copy scripts] *********************************** 2025-04-04 00:24:29.538219 | orchestrator | changed: [testbed-manager] => (item=osism-include) 2025-04-04 00:24:29.632059 | orchestrator | 2025-04-04 00:24:29.632154 | orchestrator | TASK [osism.services.manager : Include service tasks] ************************** 2025-04-04 00:24:29.632179 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager 2025-04-04 00:24:30.432266 | orchestrator | 2025-04-04 00:24:30.432369 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] ***************** 2025-04-04 00:24:30.432398 | orchestrator | changed: [testbed-manager] 2025-04-04 00:24:31.140713 | orchestrator | 2025-04-04 00:24:31.140833 | orchestrator | TASK [osism.services.manager : Create traefik external network] **************** 2025-04-04 00:24:31.140908 | orchestrator | ok: [testbed-manager] 2025-04-04 00:24:31.204484 | orchestrator | 2025-04-04 00:24:31.204608 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] *** 2025-04-04 00:24:31.204676 | orchestrator | skipping: [testbed-manager] 2025-04-04 00:24:31.265365 | orchestrator | 2025-04-04 00:24:31.265428 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] *** 2025-04-04 00:24:31.265456 | orchestrator | ok: [testbed-manager] 2025-04-04 00:24:32.160005 | orchestrator | 2025-04-04 00:24:32.160113 | 
orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] ******************* 2025-04-04 00:24:32.160146 | orchestrator | changed: [testbed-manager] 2025-04-04 00:25:12.679826 | orchestrator | 2025-04-04 00:25:12.679996 | orchestrator | TASK [osism.services.manager : Pull container images] ************************** 2025-04-04 00:25:12.680029 | orchestrator | changed: [testbed-manager] 2025-04-04 00:25:13.397987 | orchestrator | 2025-04-04 00:25:13.398137 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] *** 2025-04-04 00:25:13.398170 | orchestrator | ok: [testbed-manager] 2025-04-04 00:25:16.207408 | orchestrator | 2025-04-04 00:25:16.207522 | orchestrator | TASK [osism.services.manager : Manage manager service] ************************* 2025-04-04 00:25:16.207556 | orchestrator | changed: [testbed-manager] 2025-04-04 00:25:16.283670 | orchestrator | 2025-04-04 00:25:16.283702 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ****** 2025-04-04 00:25:16.283724 | orchestrator | ok: [testbed-manager] 2025-04-04 00:25:16.361319 | orchestrator | 2025-04-04 00:25:16.361348 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2025-04-04 00:25:16.361361 | orchestrator | 2025-04-04 00:25:16.361375 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] ************* 2025-04-04 00:25:16.361393 | orchestrator | skipping: [testbed-manager] 2025-04-04 00:26:16.431059 | orchestrator | 2025-04-04 00:26:16.431203 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] *** 2025-04-04 00:26:16.431242 | orchestrator | Pausing for 60 seconds 2025-04-04 00:26:22.016705 | orchestrator | changed: [testbed-manager] 2025-04-04 00:26:22.016835 | orchestrator | 2025-04-04 00:26:22.016904 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers 
are up] *** 2025-04-04 00:26:22.016940 | orchestrator | changed: [testbed-manager] 2025-04-04 00:27:03.891586 | orchestrator | 2025-04-04 00:27:03.891731 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] *** 2025-04-04 00:27:03.891768 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left). 2025-04-04 00:27:11.082000 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left). 2025-04-04 00:27:11.082225 | orchestrator | changed: [testbed-manager] 2025-04-04 00:27:11.082261 | orchestrator | 2025-04-04 00:27:11.082286 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] *** 2025-04-04 00:27:11.082351 | orchestrator | changed: [testbed-manager] 2025-04-04 00:27:11.189021 | orchestrator | 2025-04-04 00:27:11.189143 | orchestrator | TASK [osism.services.manager : Include initialize tasks] *********************** 2025-04-04 00:27:11.189178 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager 2025-04-04 00:27:11.253762 | orchestrator | 2025-04-04 00:27:11.253847 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2025-04-04 00:27:11.253910 | orchestrator | 2025-04-04 00:27:11.253919 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] ***************** 2025-04-04 00:27:11.253942 | orchestrator | skipping: [testbed-manager] 2025-04-04 00:27:11.459685 | orchestrator | 2025-04-04 00:27:11.459774 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-04 00:27:11.459782 | orchestrator | testbed-manager : ok=105 changed=57 unreachable=0 failed=0 skipped=18 rescued=0 ignored=0 2025-04-04 00:27:11.459789 | orchestrator | 2025-04-04 00:27:11.459807 | orchestrator | + [[ -e 
/opt/venv/bin/activate ]] 2025-04-04 00:27:11.466832 | orchestrator | + deactivate 2025-04-04 00:27:11.466856 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2025-04-04 00:27:11.466864 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-04-04 00:27:11.466869 | orchestrator | + export PATH 2025-04-04 00:27:11.466874 | orchestrator | + unset _OLD_VIRTUAL_PATH 2025-04-04 00:27:11.466902 | orchestrator | + '[' -n '' ']' 2025-04-04 00:27:11.466908 | orchestrator | + hash -r 2025-04-04 00:27:11.466913 | orchestrator | + '[' -n '' ']' 2025-04-04 00:27:11.466918 | orchestrator | + unset VIRTUAL_ENV 2025-04-04 00:27:11.466923 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2025-04-04 00:27:11.466928 | orchestrator | + '[' '!' '' = nondestructive ']' 2025-04-04 00:27:11.466933 | orchestrator | + unset -f deactivate 2025-04-04 00:27:11.466939 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub 2025-04-04 00:27:11.466949 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-04-04 00:27:11.467514 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2025-04-04 00:27:11.467523 | orchestrator | + local max_attempts=60 2025-04-04 00:27:11.467529 | orchestrator | + local name=ceph-ansible 2025-04-04 00:27:11.467535 | orchestrator | + local attempt_num=1 2025-04-04 00:27:11.467543 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-04-04 00:27:11.493610 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-04-04 00:27:11.493964 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2025-04-04 00:27:11.493986 | orchestrator | + local max_attempts=60 2025-04-04 00:27:11.494969 | orchestrator | + local name=kolla-ansible 2025-04-04 00:27:11.494984 | orchestrator | + local attempt_num=1 2025-04-04 00:27:11.494997 | 
orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2025-04-04 00:27:11.522282 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-04-04 00:27:11.523155 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2025-04-04 00:27:11.523195 | orchestrator | + local max_attempts=60 2025-04-04 00:27:11.523213 | orchestrator | + local name=osism-ansible 2025-04-04 00:27:11.523229 | orchestrator | + local attempt_num=1 2025-04-04 00:27:11.523252 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2025-04-04 00:27:11.553085 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-04-04 00:27:12.258374 | orchestrator | + [[ true == \t\r\u\e ]] 2025-04-04 00:27:12.258487 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2025-04-04 00:27:12.258522 | orchestrator | ++ semver 8.1.0 9.0.0 2025-04-04 00:27:12.310974 | orchestrator | + [[ -1 -ge 0 ]] 2025-04-04 00:27:12.523015 | orchestrator | + [[ 8.1.0 == \l\a\t\e\s\t ]] 2025-04-04 00:27:12.523051 | orchestrator | + docker compose --project-directory /opt/manager ps 2025-04-04 00:27:12.523073 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2025-04-04 00:27:12.523577 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:8.1.0 "/entrypoint.sh osis…" ceph-ansible About a minute ago Up About a minute (healthy) 2025-04-04 00:27:12.523597 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:8.1.0 "/entrypoint.sh osis…" kolla-ansible About a minute ago Up About a minute (healthy) 2025-04-04 00:27:12.523610 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" api About a minute ago Up About a minute (healthy) 192.168.16.5:8000->8000/tcp 2025-04-04 00:27:12.523644 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.2 "sh -c '/wait && /ru…" ara-server About a minute ago Up About a minute (healthy) 8000/tcp 2025-04-04 
00:27:12.523658 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" beat About a minute ago Up About a minute (healthy) 2025-04-04 00:27:12.523675 | orchestrator | manager-conductor-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" conductor About a minute ago Up About a minute (healthy) 2025-04-04 00:27:12.523688 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" flower About a minute ago Up About a minute (healthy) 2025-04-04 00:27:12.523700 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:8.1.0 "/sbin/tini -- /entr…" inventory_reconciler About a minute ago Up 50 seconds (healthy) 2025-04-04 00:27:12.523739 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" listener About a minute ago Up About a minute (healthy) 2025-04-04 00:27:12.523753 | orchestrator | manager-mariadb-1 index.docker.io/library/mariadb:11.6.2 "docker-entrypoint.s…" mariadb About a minute ago Up About a minute (healthy) 3306/tcp 2025-04-04 00:27:12.523766 | orchestrator | manager-netbox-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" netbox About a minute ago Up About a minute (healthy) 2025-04-04 00:27:12.523778 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" openstack About a minute ago Up About a minute (healthy) 2025-04-04 00:27:12.523791 | orchestrator | manager-redis-1 index.docker.io/library/redis:7.4.1-alpine "docker-entrypoint.s…" redis About a minute ago Up About a minute (healthy) 6379/tcp 2025-04-04 00:27:12.523812 | orchestrator | manager-watchdog-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" watchdog About a minute ago Up About a minute (healthy) 2025-04-04 00:27:12.530697 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:8.1.0 "/entrypoint.sh 
osis…" osism-ansible About a minute ago Up About a minute (healthy) 2025-04-04 00:27:12.530721 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:8.1.0 "/entrypoint.sh osis…" osism-kubernetes About a minute ago Up About a minute (healthy) 2025-04-04 00:27:12.530734 | orchestrator | osismclient registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- sl…" osismclient About a minute ago Up About a minute (healthy) 2025-04-04 00:27:12.530752 | orchestrator | + docker compose --project-directory /opt/netbox ps 2025-04-04 00:27:12.689120 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2025-04-04 00:27:12.699205 | orchestrator | netbox-netbox-1 registry.osism.tech/osism/netbox:v4.1.7 "/usr/bin/tini -- /o…" netbox 9 minutes ago Up 8 minutes (healthy) 2025-04-04 00:27:12.699230 | orchestrator | netbox-netbox-worker-1 registry.osism.tech/osism/netbox:v4.1.7 "/opt/netbox/venv/bi…" netbox-worker 9 minutes ago Up 3 minutes (healthy) 2025-04-04 00:27:12.699244 | orchestrator | netbox-postgres-1 index.docker.io/library/postgres:16.6-alpine "docker-entrypoint.s…" postgres 9 minutes ago Up 9 minutes (healthy) 5432/tcp 2025-04-04 00:27:12.699258 | orchestrator | netbox-redis-1 index.docker.io/library/redis:7.4.2-alpine "docker-entrypoint.s…" redis 9 minutes ago Up 9 minutes (healthy) 6379/tcp 2025-04-04 00:27:12.699275 | orchestrator | ++ semver 8.1.0 7.0.0 2025-04-04 00:27:12.753269 | orchestrator | + [[ 1 -ge 0 ]] 2025-04-04 00:27:12.756445 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg 2025-04-04 00:27:12.756475 | orchestrator | + osism apply resolvconf -l testbed-manager 2025-04-04 00:27:14.534757 | orchestrator | 2025-04-04 00:27:14 | INFO  | Task cc2b4a46-27b9-41c2-b25a-395e6512b885 (resolvconf) was prepared for execution. 
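The `wait_for_container_healthy` calls in the `set -x` trace above (for `ceph-ansible`, `kolla-ansible`, and `osism-ansible`) show the function's locals and its `docker inspect` health probe, but not its loop body. A minimal reconstruction under those assumptions — the retry/sleep behaviour is inferred, not shown in the trace, and `docker` is used by name where the trace calls `/usr/bin/docker`:

```shell
#!/usr/bin/env bash
# Reconstruction of the wait_for_container_healthy helper traced above.
# Only the locals and the inspect probe are visible in the log; the
# polling loop and sleep interval here are assumptions.
wait_for_container_healthy() {
    local max_attempts="$1"
    local name="$2"
    local attempt_num=1

    # Poll the container's healthcheck status until it reports "healthy".
    until [[ "$(docker inspect -f '{{.State.Health.Status}}' "$name")" == "healthy" ]]; do
        if (( attempt_num >= max_attempts )); then
            echo "Container ${name} did not become healthy in time" >&2
            return 1
        fi
        attempt_num=$(( attempt_num + 1 ))
        sleep 5
    done
}
```

Called as in the trace, e.g. `wait_for_container_healthy 60 ceph-ansible`, it returns immediately once `docker inspect` reports `healthy`.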
2025-04-04 00:27:17.852938 | orchestrator | 2025-04-04 00:27:14 | INFO  | It takes a moment until task cc2b4a46-27b9-41c2-b25a-395e6512b885 (resolvconf) has been started and output is visible here. 2025-04-04 00:27:17.853070 | orchestrator | 2025-04-04 00:27:17.853630 | orchestrator | PLAY [Apply role resolvconf] *************************************************** 2025-04-04 00:27:17.854968 | orchestrator | 2025-04-04 00:27:17.855635 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-04-04 00:27:17.857677 | orchestrator | Friday 04 April 2025 00:27:17 +0000 (0:00:00.092) 0:00:00.092 ********** 2025-04-04 00:27:22.111155 | orchestrator | ok: [testbed-manager] 2025-04-04 00:27:22.112324 | orchestrator | 2025-04-04 00:27:22.112723 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2025-04-04 00:27:22.113497 | orchestrator | Friday 04 April 2025 00:27:22 +0000 (0:00:04.261) 0:00:04.354 ********** 2025-04-04 00:27:22.168243 | orchestrator | skipping: [testbed-manager] 2025-04-04 00:27:22.169243 | orchestrator | 2025-04-04 00:27:22.170193 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2025-04-04 00:27:22.171102 | orchestrator | Friday 04 April 2025 00:27:22 +0000 (0:00:00.058) 0:00:04.412 ********** 2025-04-04 00:27:22.256269 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager 2025-04-04 00:27:22.258160 | orchestrator | 2025-04-04 00:27:22.259082 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2025-04-04 00:27:22.259922 | orchestrator | Friday 04 April 2025 00:27:22 +0000 (0:00:00.088) 0:00:04.501 ********** 2025-04-04 00:27:22.347607 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager 2025-04-04 00:27:22.348813 | orchestrator | 2025-04-04 00:27:22.348882 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2025-04-04 00:27:22.349487 | orchestrator | Friday 04 April 2025 00:27:22 +0000 (0:00:00.090) 0:00:04.591 ********** 2025-04-04 00:27:23.723166 | orchestrator | ok: [testbed-manager] 2025-04-04 00:27:23.723344 | orchestrator | 2025-04-04 00:27:23.723556 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2025-04-04 00:27:23.723700 | orchestrator | Friday 04 April 2025 00:27:23 +0000 (0:00:01.373) 0:00:05.965 ********** 2025-04-04 00:27:23.795650 | orchestrator | skipping: [testbed-manager] 2025-04-04 00:27:23.796906 | orchestrator | 2025-04-04 00:27:23.797109 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2025-04-04 00:27:23.798519 | orchestrator | Friday 04 April 2025 00:27:23 +0000 (0:00:00.074) 0:00:06.039 ********** 2025-04-04 00:27:24.341937 | orchestrator | ok: [testbed-manager] 2025-04-04 00:27:24.342635 | orchestrator | 2025-04-04 00:27:24.343407 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2025-04-04 00:27:24.343809 | orchestrator | Friday 04 April 2025 00:27:24 +0000 (0:00:00.542) 0:00:06.582 ********** 2025-04-04 00:27:24.412191 | orchestrator | skipping: [testbed-manager] 2025-04-04 00:27:24.412480 | orchestrator | 2025-04-04 00:27:24.414098 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2025-04-04 00:27:24.414690 | orchestrator | Friday 04 April 2025 00:27:24 +0000 (0:00:00.072) 0:00:06.655 ********** 2025-04-04 00:27:25.047269 | orchestrator | changed: [testbed-manager] 2025-04-04 00:27:25.048180 | orchestrator | 2025-04-04 
00:27:25.049752 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2025-04-04 00:27:25.050565 | orchestrator | Friday 04 April 2025 00:27:25 +0000 (0:00:00.636) 0:00:07.291 ********** 2025-04-04 00:27:26.287501 | orchestrator | changed: [testbed-manager] 2025-04-04 00:27:26.287915 | orchestrator | 2025-04-04 00:27:26.288823 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2025-04-04 00:27:26.288988 | orchestrator | Friday 04 April 2025 00:27:26 +0000 (0:00:01.238) 0:00:08.530 ********** 2025-04-04 00:27:27.380372 | orchestrator | ok: [testbed-manager] 2025-04-04 00:27:27.380837 | orchestrator | 2025-04-04 00:27:27.381661 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2025-04-04 00:27:27.382335 | orchestrator | Friday 04 April 2025 00:27:27 +0000 (0:00:01.092) 0:00:09.622 ********** 2025-04-04 00:27:27.478579 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager 2025-04-04 00:27:27.479214 | orchestrator | 2025-04-04 00:27:27.479248 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2025-04-04 00:27:27.479299 | orchestrator | Friday 04 April 2025 00:27:27 +0000 (0:00:00.098) 0:00:09.721 ********** 2025-04-04 00:27:28.780802 | orchestrator | changed: [testbed-manager] 2025-04-04 00:27:28.781320 | orchestrator | 2025-04-04 00:27:28.781366 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-04 00:27:28.782223 | orchestrator | 2025-04-04 00:27:28 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-04-04 00:27:28.783398 | orchestrator | 2025-04-04 00:27:28 | INFO  | Please wait and do not abort execution. 
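The `osism.commons.resolvconf` tasks above boil down to archiving any existing `/etc/resolv.conf` and pointing it at the systemd-resolved stub file. A shell approximation of those two file operations, wrapped in a function with a root-prefix parameter so the logic can be exercised outside a live system (the function name, prefix parameter, and `.save` archive suffix are illustrative assumptions; the role's actual archive naming may differ):

```shell
#!/usr/bin/env bash
# Sketch of the core file operations performed by the resolvconf role.
# Pass "" as root to act on the real /etc, or a scratch directory to
# dry-run the logic.
link_stub_resolv() {
    local root="$1"
    local resolv="${root}/etc/resolv.conf"
    local stub="/run/systemd/resolve/stub-resolv.conf"

    mkdir -p "${root}/etc"

    # Mirrors "Archive existing file /etc/resolv.conf": keep a copy of a
    # pre-existing regular file before replacing it with the symlink.
    if [[ -f "$resolv" && ! -L "$resolv" ]]; then
        cp "$resolv" "${resolv}.save"
    fi

    # Mirrors "Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf".
    ln -sfn "$stub" "$resolv"
}
```

The role additionally installs/enables `systemd-resolved` and removes conflicting packages; those distribution-specific steps are omitted here.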
2025-04-04 00:27:28.783433 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-04-04 00:27:28.784013 | orchestrator |
2025-04-04 00:27:28.784710 | orchestrator | Friday 04 April 2025 00:27:28 +0000 (0:00:01.303) 0:00:11.024 **********
2025-04-04 00:27:28.785174 | orchestrator | ===============================================================================
2025-04-04 00:27:28.786398 | orchestrator | Gathering Facts --------------------------------------------------------- 4.26s
2025-04-04 00:27:28.787119 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.37s
2025-04-04 00:27:28.788765 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.30s
2025-04-04 00:27:28.788797 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.24s
2025-04-04 00:27:28.789380 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 1.09s
2025-04-04 00:27:28.789814 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.64s
2025-04-04 00:27:28.790631 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.54s
2025-04-04 00:27:28.791001 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.10s
2025-04-04 00:27:28.791711 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.09s
2025-04-04 00:27:28.792090 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.09s
2025-04-04 00:27:28.792375 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.07s
2025-04-04 00:27:28.792926 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.07s
2025-04-04 00:27:28.793101 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.06s
2025-04-04 00:27:29.299286 | orchestrator | + osism apply sshconfig
2025-04-04 00:27:30.944833 | orchestrator | 2025-04-04 00:27:30 | INFO  | Task 98fd7fe7-2c4e-444a-8f6e-5abb4cc2b1d7 (sshconfig) was prepared for execution.
2025-04-04 00:27:34.392568 | orchestrator | 2025-04-04 00:27:30 | INFO  | It takes a moment until task 98fd7fe7-2c4e-444a-8f6e-5abb4cc2b1d7 (sshconfig) has been started and output is visible here.
2025-04-04 00:27:34.392727 | orchestrator |
2025-04-04 00:27:34.393604 | orchestrator | PLAY [Apply role sshconfig] ****************************************************
2025-04-04 00:27:34.394390 | orchestrator |
2025-04-04 00:27:34.396137 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] ***********
2025-04-04 00:27:34.397444 | orchestrator | Friday 04 April 2025 00:27:34 +0000 (0:00:00.140) 0:00:00.140 **********
2025-04-04 00:27:35.016282 | orchestrator | ok: [testbed-manager]
2025-04-04 00:27:35.016995 | orchestrator |
2025-04-04 00:27:35.017043 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ********************
2025-04-04 00:27:35.017487 | orchestrator | Friday 04 April 2025 00:27:35 +0000 (0:00:00.624) 0:00:00.764 **********
2025-04-04 00:27:35.571160 | orchestrator | changed: [testbed-manager]
2025-04-04 00:27:35.572413 | orchestrator |
2025-04-04 00:27:35.572448 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] *************
2025-04-04 00:27:35.572994 | orchestrator | Friday 04 April 2025 00:27:35 +0000 (0:00:00.554) 0:00:01.319 **********
2025-04-04 00:27:41.828830 | orchestrator | changed: [testbed-manager] => (item=testbed-manager)
2025-04-04 00:27:41.829526 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3)
2025-04-04 00:27:41.830283 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4)
2025-04-04 00:27:41.830886 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5)
2025-04-04 00:27:41.831678 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0)
2025-04-04 00:27:41.832005 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1)
2025-04-04 00:27:41.832424 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2)
2025-04-04 00:27:41.832963 | orchestrator |
2025-04-04 00:27:41.835386 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ******************************
2025-04-04 00:27:41.836059 | orchestrator | Friday 04 April 2025 00:27:41 +0000 (0:00:06.258) 0:00:07.577 **********
2025-04-04 00:27:41.906588 | orchestrator | skipping: [testbed-manager]
2025-04-04 00:27:41.907011 | orchestrator |
2025-04-04 00:27:41.907062 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] ***************************
2025-04-04 00:27:42.516034 | orchestrator | Friday 04 April 2025 00:27:41 +0000 (0:00:00.076) 0:00:07.654 **********
2025-04-04 00:27:42.516158 | orchestrator | changed: [testbed-manager]
2025-04-04 00:27:42.516971 | orchestrator |
2025-04-04 00:27:42.518109 | orchestrator | PLAY RECAP *********************************************************************
2025-04-04 00:27:42.518349 | orchestrator | 2025-04-04 00:27:42 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-04-04 00:27:42.518454 | orchestrator | 2025-04-04 00:27:42 | INFO  | Please wait and do not abort execution.
2025-04-04 00:27:42.519672 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-04-04 00:27:42.520526 | orchestrator |
2025-04-04 00:27:42.521059 | orchestrator | Friday 04 April 2025 00:27:42 +0000 (0:00:00.612) 0:00:08.267 **********
2025-04-04 00:27:42.521594 | orchestrator | ===============================================================================
2025-04-04 00:27:42.523102 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 6.26s
2025-04-04 00:27:42.524415 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.62s
2025-04-04 00:27:42.525612 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.61s
2025-04-04 00:27:42.526802 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist -------------------- 0.55s
2025-04-04 00:27:42.527550 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.08s
2025-04-04 00:27:43.042540 | orchestrator | + osism apply known-hosts
2025-04-04 00:27:44.643742 | orchestrator | 2025-04-04 00:27:44 | INFO  | Task 7ec79f41-a7bb-4c75-86f0-53f964b51d1c (known-hosts) was prepared for execution.
2025-04-04 00:27:48.087507 | orchestrator | 2025-04-04 00:27:44 | INFO  | It takes a moment until task 7ec79f41-a7bb-4c75-86f0-53f964b51d1c (known-hosts) has been started and output is visible here.
2025-04-04 00:27:48.087666 | orchestrator |
2025-04-04 00:27:48.087807 | orchestrator | PLAY [Apply role known_hosts] **************************************************
2025-04-04 00:27:48.089339 | orchestrator |
2025-04-04 00:27:48.089571 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] ***
2025-04-04 00:27:48.090304 | orchestrator | Friday 04 April 2025 00:27:48 +0000 (0:00:00.121) 0:00:00.121 **********
2025-04-04 00:27:54.077212 | orchestrator | ok: [testbed-manager] => (item=testbed-manager)
2025-04-04 00:27:54.078653 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3)
2025-04-04 00:27:54.079323 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4)
2025-04-04 00:27:54.079699 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5)
2025-04-04 00:27:54.080025 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0)
2025-04-04 00:27:54.083472 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1)
2025-04-04 00:27:54.086083 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2)
2025-04-04 00:27:54.086113 | orchestrator |
2025-04-04 00:27:54.086136 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] ***
2025-04-04 00:27:54.086744 | orchestrator | Friday 04 April 2025 00:27:54 +0000 (0:00:05.991) 0:00:06.113 **********
2025-04-04 00:27:54.262231 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager)
2025-04-04 00:27:54.262378 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3)
2025-04-04 00:27:54.262579 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4)
2025-04-04 00:27:54.262605 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5)
2025-04-04 00:27:54.262626 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0)
2025-04-04 00:27:54.263299 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1)
2025-04-04 00:27:54.263710 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2)
2025-04-04 00:27:54.264657 | orchestrator |
2025-04-04 00:27:54.264767 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-04-04 00:27:54.264792 | orchestrator | Friday 04 April 2025 00:27:54 +0000 (0:00:00.181) 0:00:06.295 **********
2025-04-04 00:27:55.585847 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCqE2BhnUZ+XbBIJ4gMH/Qaztzr0bz3kekUmBrvnpPhGpwiUYcdCpwxFPTK1FRu2VKWGOjj7xlRlknjc378rdWSHHBDz47M7jJve3Jxmv3ccQlorg/5XvHVjQCtdd4/M5XudLOvVUMbrGMTOmkl4/K+aqH2tmt2YzGZjtOPHqGBgnGieFLUSh4sijcFkU3tQOWzwGlkBwYxxCI/iVYpewh4SXM9DqmDUgUGho33tGV6QtnvQzjLqcLq34V6z785m0Tf04fYBZzJwegSdGl2RFZ9pUNEfZsDbd9I+1I4vpaTABRjTwB828FqFMXWRN/joe9lKEHqv5iVLFljCbl01xbCcTLDSyGowRx4a/7S4xZc3f6M9YZlQpE8fhH+4KL+rgZX3iZhiX7rJ6rABNfA7J50sL+F33ugH+bhFF4MwfGRHeffuMNQrtn3leKMOXDx4xh8Jd07D3d/YLmSHCicaPxD06X/CQDlKRelab0N3cLN7nEFzGWbh23+hLvrKOMybyE=)
2025-04-04 00:27:55.586212 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNMvAqT62wRuFAH6p3D1j3CEYgURiOqNTLRICr1ztHxz9M5AGpdg/8qUkIrr6astCZU+dVDrKT9ahyLB0B6g1FA=)
2025-04-04 00:27:55.586904 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIN5gU2da/qK9lx2LNuyw4Cl+tngdoDJC3ok6fuXIplQB)
2025-04-04 00:27:55.587039 | orchestrator |
2025-04-04 00:27:55.587903 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-04-04 00:27:55.588238 | orchestrator | Friday 04 April 2025 00:27:55 +0000 (0:00:01.326) 0:00:07.622 **********
2025-04-04 00:27:56.822576 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFjfH+Y3KomP03NXiqVX9958kPhIH0nDhdOis7/jMln5)
2025-04-04 00:27:56.823212 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDXppXJcVZPBQHvxti0DRGjmuCChGE8raD04TroKMDrNi0V4/sEfMUtw5ra/xCqiihyP7TCnqDGc6wpl/+L1U6sSUIzT58VXOJuFhptiyAYuiPCtSV4mWzHE5VINOtBB73pLuaQwOyeVWY+2GzpclqLEZyIO63uXt3LgG+blZlQvAO13+k+AZ2QX0nv6LoZ15YUegUxorsF1y88rnmD6TF9UcauQ+ppar6wXJu3+vygrOWr/vH5VE7NagfYfHitkzPvRBvCePRgLMh31JCPNtEMSFIu+0BPOdOmyFLeI93cWStAwpckmkt3j60DoMETtf67EC5ykorXCnd9J1rbH4nKS733iY0CCEaiZb5r54si8eBt7XnPKn1ySk+05W+7GmCx08OQp0sNunbWf1gAIv4rrvdDbNoTFweLjBlGHADLOeU5NjrQmVuuYrebT8BY/zmb+FCWhushzbR7OSMgfLsZMj0pm/vt3hGCKjgtj6H80VjAYA8BTuIJ2aS1T/RwXoc=)
2025-04-04 00:27:56.823280 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIenO23DT1TtRLxrVnG9xRwXJxJ6CgBdTqZJsj8bkmrmn1NKAhOfSz9ArwSiAjswGsCWPlMLbP6iYCIL1jtMWIw=)
2025-04-04 00:27:56.824072 | orchestrator |
2025-04-04 00:27:56.824251 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-04-04 00:27:56.824942 | orchestrator | Friday 04 April 2025 00:27:56 +0000 (0:00:01.236) 0:00:08.858 **********
2025-04-04 00:27:57.947293 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDKVR3VR/ib0tRaxXTgt3PX99bONaSoAK5mUBr20tDox+G2w55ISEz0sPHRHmrZ0fdm3G/u/TzV5zAHUVrZ15JbSIV1c4CJ+oSqZlSN89D2GzDSAYnW4OrsI8MqOYCNkyqCuCKbCXVSk4vkYUZcf11EKr5aiANxEPahpliee0ew6EAyviHvfPyPpQuKO2o1W3Rt3okstrYZ/TnuSRaN42tEJwwBN2LlR7EZwVJQrntvqVlyyZp3t+OhCVmRCIEoEZcK2TyDaJLC9JPEt15Osmka0UPxyALqms4etG8X54GAn/cc6/3uohpHozUFiGOa+i7zhMN30saNyTLCR8kWeR4GGdr6my31wEO1NfxvtCxjit/fH9wvP5mMt423Bkz+Y+crrYs/A2a83q1HAauPy1o+XHzMoE2bTWOX89GU8BO0HQcXz66FNH/D0gORS2yx/1FGThfRhzstS6dx4x60asp+q26Xa97FNMfvZAqH2lPuCFb2HXbRLP1Vgm31oFMJ5gk=)
2025-04-04 00:27:57.948258 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBhVc97e+yR0jgmQRaQX4ARMc8DYjz/5OMLNlebqPUOojEeN7WTVaTWhG+a3UsfObU93fPQfjEgbjWfRQGcozKE=)
2025-04-04 00:27:57.948295 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICodePNedgCRIcXHR4dtxkRHF/K/NxnaQqa7iPu/RNh6)
2025-04-04 00:27:57.948848 | orchestrator |
2025-04-04 00:27:57.948900 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-04-04 00:27:57.949561 | orchestrator | Friday 04 April 2025 00:27:57 +0000 (0:00:01.124) 0:00:09.983 **********
2025-04-04 00:27:59.119122 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC9pARaBQF7lHnAA9uJazsbD6K7OX8A4QkV4JJjOUhCBpDwQrqDw1n10fIUd8am3C1MPjdozWTZi5VYNUI4pVwns809UryoMZ5QwI7N910fUaVC+Wu+Du1ZMXnFswg/Av4z1YuortLdg89mWdezMAcnB6AR53qJNZgEAxSnQ3OS+GKE7R/nV+pcnkqx7vVGGMs4wipYqAPbcVQykHjyGw3S/3xzoiXJ5S2qQHwGIpVetDcV3s1/6Wq9Y2KGvl4ZUyqiOkuUux1G4OZkUy8Ox4yK2qFNDL3YL0GQbbrdK0IPx2OSHIhA9CW4u0mPlsJrR4UvbDihAgThZrenuy7LmYWZDZvzYpxqq4L+O+Hh6VgiN2KT/tJx/9ykCAZQmOvlqKbuROfmVt117upinMQrwq1lZAm9/LRzwY6XHlYkYGauVm1+778D3W1JyL1TD1guBffN4HXT63JbtTslxaLtC/vzj4hP9uCRvIvrLabwR/SPsVfzZtFFcNE7iOIt4dE6sJU=)
2025-04-04 00:27:59.119531 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKUGJs1jKiJZy3F+cBFMy6Hbn0zf6MYkYuO5Octkj32D3blAZiRTQQ9/BfqxKfxMcj9q44Sqy2KlKCcm/vgIZF4=)
2025-04-04 00:27:59.119883 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIE5d4vLw24pnekeN23i2TFzC88T8VUCS4SgKfCpeRQ78)
2025-04-04 00:27:59.120152 | orchestrator |
2025-04-04 00:27:59.120500 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-04-04 00:27:59.121273 | orchestrator | Friday 04 April 2025 00:27:59 +0000 (0:00:01.173) 0:00:11.156 **********
2025-04-04 00:28:00.265495 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC2+4ocQ7qVS4z50fPZ0zGGCprCWYRrSDlJptMiUQOEIgYqb1+XHRd3oCtni+7kmFMbf2ifT1eF5EaAR6pTFrjcTauZ6NmfqoWZOdKl70t7LwW6uDQ3DdQ9ojqXQn7hyrybQ7A2INl7aVP6dT5t0IprRS9RYSij2NSvxQ5P9BCbPE3e2sQ30eOizcqyCkVDSPijrIlGuhHWfxt0ZmIKuSQKO2iHmB9A61rGb3UXiPbgC9lx3MBf7JmKPihGPzZxlqhXpJqdO5qNEwu+RcuTos6B+SBbxbbdEJDdE3HfeyzverdcSU6ich5eBuMJ2VtlQh/GR8BO6QyV7ml+H/95+qxMPLV+cZ4tfE36k4Bf0ohTjCoadBobbOZdZmO11tsrnqiL/kMVVwkP4O9NXnsfIFi/3ywg6oo5GieKMo2yV6UDD7HoJvym1Y1z1pc/mfA8tTv+ivCDR7ExzlcvkqgPBaf7gkvT9YiNna8usMkeDJzZ9/y32eD4njoXLHWU7QkWAnM=)
2025-04-04 00:28:00.266082 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAkPASVUQKDHXq4Tekx4Xchm0fffLdNmFazNTJUyIZmvyLyfZJjfww1RRqGHZltsG8EijLKOa6+BQP9KXc3Zwn8=)
2025-04-04 00:28:00.266458 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDYsz2BjFrHxDQatZpq1vK2WEPTLerPgbO4IzpxTrxZO)
2025-04-04 00:28:00.266495 | orchestrator |
2025-04-04 00:28:00.267542 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-04-04 00:28:00.267996 | orchestrator | Friday 04 April 2025 00:28:00 +0000 (0:00:01.145) 0:00:12.301 **********
2025-04-04 00:28:01.438335 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDEQHJazHX7MGnJnqHk8LH038w5gu+TGriXLTosxEDyrXYY/OPgmGNh+KX/3nm38x+6wZv4f18xBTqXJY9Lny88M3Jt0iMSzccVTbmOJlu406evUB2beBo+Ni9xDZ+yECDHH2jtlG7S/g0BzRSePb2+HpvROz4BcAKi5mBwNORzeZKm94Et9IBIKgR6sW+O2hTmi0tkK8+BQLdgXDPkVYHV00R23UH5TNkBMvFTF4bSKeEzceVQJdEuH2zFpP5svNt/H5NGMO3Si9usyATKyOITRiGwkKGBky39VvqNcCQum/rw/uvkNqHAdiafSRgO1E9SaJWLC6BBm5yZTkTyxhNK5ixVJlqgnYAYGd27PoVsPG12+M7F0J9Z9prunS75s1ezFCb9yOOIH2IQz2uS5w+q9+Yv37rbGz46mvelZrtVwvW11/0Be88sJTJAnz2mtQDhmZ+T8PmrxFR/Vf9Q95iXMY1YitQDOLAmdgqHOFCxmpbz3JEQ66lIDiP+zkdnZ7E=)
2025-04-04 00:28:01.439104 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJJoNeDrgK9zWUdRp3hPDrwnncRPjLFTAr7SVGAavomwxzmIgPJHk//snnjmAdmnvTpffvhLzec5wzaGaDDajz4=)
2025-04-04 00:28:01.439683 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMOgD31SYP1d3O31Fh3fNUO8tQLVtDIlWeZKiJvffqQD)
2025-04-04 00:28:01.440498 | orchestrator |
2025-04-04 00:28:01.441224 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-04-04 00:28:01.441590 | orchestrator | Friday 04 April 2025 00:28:01 +0000 (0:00:01.171) 0:00:13.472 **********
2025-04-04 00:28:02.601390 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC+5GReGEAIoO1DrKFgsjL/FvcfD5/hdRARZjFoDzn/YveVS1ZonXIx7sGsMlpB5JcRifKD6Amupn9yGYshaRjJXgKWaQuZwdwY9r9aG5JV7qqBn47+9B8mhPplflyB6MmcoZcFpemquXXLBF2GzPCUAP6evkK0M4kHIEUOxOQxmhlTFz0bhe131vx+RYZPhB0SnW6zo0+a8ApqusTQGiWD99Lbrip+QXKQ+Go0ItMdg7Semz0tVTFR7iSatoEMxml5qeJfvi1tRChZRXqcv5aSD7SwUYAuvfiRk5sUjcLzqzJoBUtH4A0i6MbC8oo6356+PLiYDHYY9hEHSsR+wTorA4erS7SHHrL6bByV+PUzuIRLr/udR7Rq/joQUuFdrWzA6U50Y0i8VRJQ5zBuVyw2dZmBbDEjoPegkhrLrRoft5ewsiHLheW7Mr3zXNaWdvcCJmtxGHQJiiYVR7XGXeTqXK5+yyl8q4oQR3WTdnYWvs9RL9OnE6eJScbYbHK0bT0=)
2025-04-04 00:28:02.602197 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBOaDBjp7bri1z+b7Ryo0cVRzyiPMRvIbmqrd3RXMYD5vw9IMUDyF/w3dIiYu7XcJJ9U8BGoNpfh/QgUNow0UUo=)
2025-04-04 00:28:02.602246 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJJKuwVY/bk22OJNhnDTwjd9Kxtuu+yBIaHlRDwXmQHP)
2025-04-04 00:28:02.602649 | orchestrator |
2025-04-04 00:28:02.603788 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] ***
2025-04-04 00:28:02.603998 | orchestrator | Friday 04 April 2025 00:28:02 +0000 (0:00:01.165) 0:00:14.638 **********
2025-04-04 00:28:07.974907 | orchestrator | ok: [testbed-manager] => (item=testbed-manager)
2025-04-04 00:28:07.975875 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3)
2025-04-04 00:28:07.975917 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4)
2025-04-04 00:28:07.977562 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5)
2025-04-04 00:28:07.980059 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0)
2025-04-04 00:28:07.981179 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1)
2025-04-04 00:28:07.981903 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2)
2025-04-04 00:28:07.982645 | orchestrator |
2025-04-04 00:28:07.983428 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] ***
2025-04-04 00:28:07.983945 | orchestrator | Friday 04 April 2025 00:28:07 +0000 (0:00:05.372) 0:00:20.010 **********
2025-04-04 00:28:08.150900 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager)
2025-04-04 00:28:08.151537 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3)
2025-04-04 00:28:08.152431 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4)
2025-04-04 00:28:08.153753 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5)
2025-04-04 00:28:08.154965 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0)
2025-04-04 00:28:08.155522 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1)
2025-04-04 00:28:08.156643 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2)
2025-04-04 00:28:08.157596 | orchestrator |
2025-04-04 00:28:08.158254 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-04-04 00:28:08.159227 | orchestrator | Friday 04 April 2025 00:28:08 +0000 (0:00:00.178) 0:00:20.189 **********
2025-04-04 00:28:09.325793 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCqE2BhnUZ+XbBIJ4gMH/Qaztzr0bz3kekUmBrvnpPhGpwiUYcdCpwxFPTK1FRu2VKWGOjj7xlRlknjc378rdWSHHBDz47M7jJve3Jxmv3ccQlorg/5XvHVjQCtdd4/M5XudLOvVUMbrGMTOmkl4/K+aqH2tmt2YzGZjtOPHqGBgnGieFLUSh4sijcFkU3tQOWzwGlkBwYxxCI/iVYpewh4SXM9DqmDUgUGho33tGV6QtnvQzjLqcLq34V6z785m0Tf04fYBZzJwegSdGl2RFZ9pUNEfZsDbd9I+1I4vpaTABRjTwB828FqFMXWRN/joe9lKEHqv5iVLFljCbl01xbCcTLDSyGowRx4a/7S4xZc3f6M9YZlQpE8fhH+4KL+rgZX3iZhiX7rJ6rABNfA7J50sL+F33ugH+bhFF4MwfGRHeffuMNQrtn3leKMOXDx4xh8Jd07D3d/YLmSHCicaPxD06X/CQDlKRelab0N3cLN7nEFzGWbh23+hLvrKOMybyE=)
2025-04-04 00:28:09.326137 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNMvAqT62wRuFAH6p3D1j3CEYgURiOqNTLRICr1ztHxz9M5AGpdg/8qUkIrr6astCZU+dVDrKT9ahyLB0B6g1FA=)
2025-04-04 00:28:09.326732 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIN5gU2da/qK9lx2LNuyw4Cl+tngdoDJC3ok6fuXIplQB)
2025-04-04 00:28:09.327274 | orchestrator |
2025-04-04 00:28:09.328080 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-04-04 00:28:09.330178 | orchestrator | Friday 04 April 2025 00:28:09 +0000 (0:00:01.173) 0:00:21.362 **********
2025-04-04 00:28:10.519654 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDXppXJcVZPBQHvxti0DRGjmuCChGE8raD04TroKMDrNi0V4/sEfMUtw5ra/xCqiihyP7TCnqDGc6wpl/+L1U6sSUIzT58VXOJuFhptiyAYuiPCtSV4mWzHE5VINOtBB73pLuaQwOyeVWY+2GzpclqLEZyIO63uXt3LgG+blZlQvAO13+k+AZ2QX0nv6LoZ15YUegUxorsF1y88rnmD6TF9UcauQ+ppar6wXJu3+vygrOWr/vH5VE7NagfYfHitkzPvRBvCePRgLMh31JCPNtEMSFIu+0BPOdOmyFLeI93cWStAwpckmkt3j60DoMETtf67EC5ykorXCnd9J1rbH4nKS733iY0CCEaiZb5r54si8eBt7XnPKn1ySk+05W+7GmCx08OQp0sNunbWf1gAIv4rrvdDbNoTFweLjBlGHADLOeU5NjrQmVuuYrebT8BY/zmb+FCWhushzbR7OSMgfLsZMj0pm/vt3hGCKjgtj6H80VjAYA8BTuIJ2aS1T/RwXoc=)
2025-04-04 00:28:10.520090 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIenO23DT1TtRLxrVnG9xRwXJxJ6CgBdTqZJsj8bkmrmn1NKAhOfSz9ArwSiAjswGsCWPlMLbP6iYCIL1jtMWIw=)
2025-04-04 00:28:10.522934 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFjfH+Y3KomP03NXiqVX9958kPhIH0nDhdOis7/jMln5)
2025-04-04 00:28:10.523600 | orchestrator |
2025-04-04 00:28:10.523635 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-04-04 00:28:10.526078 | orchestrator | Friday 04 April 2025 00:28:10 +0000 (0:00:01.193) 0:00:22.555 **********
2025-04-04 00:28:11.721071 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDKVR3VR/ib0tRaxXTgt3PX99bONaSoAK5mUBr20tDox+G2w55ISEz0sPHRHmrZ0fdm3G/u/TzV5zAHUVrZ15JbSIV1c4CJ+oSqZlSN89D2GzDSAYnW4OrsI8MqOYCNkyqCuCKbCXVSk4vkYUZcf11EKr5aiANxEPahpliee0ew6EAyviHvfPyPpQuKO2o1W3Rt3okstrYZ/TnuSRaN42tEJwwBN2LlR7EZwVJQrntvqVlyyZp3t+OhCVmRCIEoEZcK2TyDaJLC9JPEt15Osmka0UPxyALqms4etG8X54GAn/cc6/3uohpHozUFiGOa+i7zhMN30saNyTLCR8kWeR4GGdr6my31wEO1NfxvtCxjit/fH9wvP5mMt423Bkz+Y+crrYs/A2a83q1HAauPy1o+XHzMoE2bTWOX89GU8BO0HQcXz66FNH/D0gORS2yx/1FGThfRhzstS6dx4x60asp+q26Xa97FNMfvZAqH2lPuCFb2HXbRLP1Vgm31oFMJ5gk=)
2025-04-04 00:28:11.722735 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBhVc97e+yR0jgmQRaQX4ARMc8DYjz/5OMLNlebqPUOojEeN7WTVaTWhG+a3UsfObU93fPQfjEgbjWfRQGcozKE=)
2025-04-04 00:28:11.722977 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICodePNedgCRIcXHR4dtxkRHF/K/NxnaQqa7iPu/RNh6)
2025-04-04 00:28:11.723633 | orchestrator |
2025-04-04 00:28:11.724346 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-04-04 00:28:11.724801 | orchestrator | Friday 04 April 2025 00:28:11 +0000 (0:00:01.200) 0:00:23.756 **********
2025-04-04 00:28:12.870381 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC9pARaBQF7lHnAA9uJazsbD6K7OX8A4QkV4JJjOUhCBpDwQrqDw1n10fIUd8am3C1MPjdozWTZi5VYNUI4pVwns809UryoMZ5QwI7N910fUaVC+Wu+Du1ZMXnFswg/Av4z1YuortLdg89mWdezMAcnB6AR53qJNZgEAxSnQ3OS+GKE7R/nV+pcnkqx7vVGGMs4wipYqAPbcVQykHjyGw3S/3xzoiXJ5S2qQHwGIpVetDcV3s1/6Wq9Y2KGvl4ZUyqiOkuUux1G4OZkUy8Ox4yK2qFNDL3YL0GQbbrdK0IPx2OSHIhA9CW4u0mPlsJrR4UvbDihAgThZrenuy7LmYWZDZvzYpxqq4L+O+Hh6VgiN2KT/tJx/9ykCAZQmOvlqKbuROfmVt117upinMQrwq1lZAm9/LRzwY6XHlYkYGauVm1+778D3W1JyL1TD1guBffN4HXT63JbtTslxaLtC/vzj4hP9uCRvIvrLabwR/SPsVfzZtFFcNE7iOIt4dE6sJU=)
2025-04-04 00:28:12.871147 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKUGJs1jKiJZy3F+cBFMy6Hbn0zf6MYkYuO5Octkj32D3blAZiRTQQ9/BfqxKfxMcj9q44Sqy2KlKCcm/vgIZF4=)
2025-04-04 00:28:12.871200 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIE5d4vLw24pnekeN23i2TFzC88T8VUCS4SgKfCpeRQ78)
2025-04-04 00:28:12.871314 | orchestrator |
2025-04-04 00:28:12.871935 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-04-04 00:28:12.872044 | orchestrator | Friday 04 April 2025 00:28:12 +0000 (0:00:01.150) 0:00:24.907 **********
2025-04-04 00:28:14.051090 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAkPASVUQKDHXq4Tekx4Xchm0fffLdNmFazNTJUyIZmvyLyfZJjfww1RRqGHZltsG8EijLKOa6+BQP9KXc3Zwn8=)
2025-04-04 00:28:14.051572 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC2+4ocQ7qVS4z50fPZ0zGGCprCWYRrSDlJptMiUQOEIgYqb1+XHRd3oCtni+7kmFMbf2ifT1eF5EaAR6pTFrjcTauZ6NmfqoWZOdKl70t7LwW6uDQ3DdQ9ojqXQn7hyrybQ7A2INl7aVP6dT5t0IprRS9RYSij2NSvxQ5P9BCbPE3e2sQ30eOizcqyCkVDSPijrIlGuhHWfxt0ZmIKuSQKO2iHmB9A61rGb3UXiPbgC9lx3MBf7JmKPihGPzZxlqhXpJqdO5qNEwu+RcuTos6B+SBbxbbdEJDdE3HfeyzverdcSU6ich5eBuMJ2VtlQh/GR8BO6QyV7ml+H/95+qxMPLV+cZ4tfE36k4Bf0ohTjCoadBobbOZdZmO11tsrnqiL/kMVVwkP4O9NXnsfIFi/3ywg6oo5GieKMo2yV6UDD7HoJvym1Y1z1pc/mfA8tTv+ivCDR7ExzlcvkqgPBaf7gkvT9YiNna8usMkeDJzZ9/y32eD4njoXLHWU7QkWAnM=)
2025-04-04 00:28:14.052467 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDYsz2BjFrHxDQatZpq1vK2WEPTLerPgbO4IzpxTrxZO)
2025-04-04 00:28:14.053352 | orchestrator |
2025-04-04 00:28:14.053873 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-04-04 00:28:14.054381 | orchestrator | Friday 04 April 2025 00:28:14 +0000 (0:00:01.180) 0:00:26.088 **********
2025-04-04 00:28:15.222308 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDEQHJazHX7MGnJnqHk8LH038w5gu+TGriXLTosxEDyrXYY/OPgmGNh+KX/3nm38x+6wZv4f18xBTqXJY9Lny88M3Jt0iMSzccVTbmOJlu406evUB2beBo+Ni9xDZ+yECDHH2jtlG7S/g0BzRSePb2+HpvROz4BcAKi5mBwNORzeZKm94Et9IBIKgR6sW+O2hTmi0tkK8+BQLdgXDPkVYHV00R23UH5TNkBMvFTF4bSKeEzceVQJdEuH2zFpP5svNt/H5NGMO3Si9usyATKyOITRiGwkKGBky39VvqNcCQum/rw/uvkNqHAdiafSRgO1E9SaJWLC6BBm5yZTkTyxhNK5ixVJlqgnYAYGd27PoVsPG12+M7F0J9Z9prunS75s1ezFCb9yOOIH2IQz2uS5w+q9+Yv37rbGz46mvelZrtVwvW11/0Be88sJTJAnz2mtQDhmZ+T8PmrxFR/Vf9Q95iXMY1YitQDOLAmdgqHOFCxmpbz3JEQ66lIDiP+zkdnZ7E=)
2025-04-04 00:28:15.223059 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJJoNeDrgK9zWUdRp3hPDrwnncRPjLFTAr7SVGAavomwxzmIgPJHk//snnjmAdmnvTpffvhLzec5wzaGaDDajz4=)
2025-04-04 00:28:15.224191 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMOgD31SYP1d3O31Fh3fNUO8tQLVtDIlWeZKiJvffqQD)
2025-04-04 00:28:15.224557 | orchestrator |
2025-04-04 00:28:15.225333 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-04-04 00:28:15.227185 | orchestrator | Friday 04 April 2025 00:28:15 +0000 (0:00:01.170) 0:00:27.258 **********
2025-04-04 00:28:16.403834 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBOaDBjp7bri1z+b7Ryo0cVRzyiPMRvIbmqrd3RXMYD5vw9IMUDyF/w3dIiYu7XcJJ9U8BGoNpfh/QgUNow0UUo=)
2025-04-04 00:28:16.404466 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC+5GReGEAIoO1DrKFgsjL/FvcfD5/hdRARZjFoDzn/YveVS1ZonXIx7sGsMlpB5JcRifKD6Amupn9yGYshaRjJXgKWaQuZwdwY9r9aG5JV7qqBn47+9B8mhPplflyB6MmcoZcFpemquXXLBF2GzPCUAP6evkK0M4kHIEUOxOQxmhlTFz0bhe131vx+RYZPhB0SnW6zo0+a8ApqusTQGiWD99Lbrip+QXKQ+Go0ItMdg7Semz0tVTFR7iSatoEMxml5qeJfvi1tRChZRXqcv5aSD7SwUYAuvfiRk5sUjcLzqzJoBUtH4A0i6MbC8oo6356+PLiYDHYY9hEHSsR+wTorA4erS7SHHrL6bByV+PUzuIRLr/udR7Rq/joQUuFdrWzA6U50Y0i8VRJQ5zBuVyw2dZmBbDEjoPegkhrLrRoft5ewsiHLheW7Mr3zXNaWdvcCJmtxGHQJiiYVR7XGXeTqXK5+yyl8q4oQR3WTdnYWvs9RL9OnE6eJScbYbHK0bT0=)
2025-04-04 00:28:16.404508 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJJKuwVY/bk22OJNhnDTwjd9Kxtuu+yBIaHlRDwXmQHP)
2025-04-04 00:28:16.404881 | orchestrator |
2025-04-04 00:28:16.405510 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************
2025-04-04 00:28:16.406143 | orchestrator | Friday 04 April 2025 00:28:16 +0000 (0:00:00.183) 0:00:28.440 **********
2025-04-04 00:28:16.587032 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)
2025-04-04 00:28:16.587139 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)
2025-04-04 00:28:16.587159 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)
2025-04-04 00:28:16.588709 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)
2025-04-04 00:28:16.589735 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2025-04-04 00:28:16.589763 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2025-04-04 00:28:16.590875 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2025-04-04 00:28:16.590906 | orchestrator | skipping: [testbed-manager]
2025-04-04 00:28:16.591268 | orchestrator |
2025-04-04 00:28:16.591640 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] *************
2025-04-04 00:28:16.592374 | orchestrator | Friday 04 April 2025 00:28:16 +0000 (0:00:00.183) 0:00:28.623 **********
2025-04-04 00:28:16.661145 | orchestrator | skipping: [testbed-manager]
2025-04-04 00:28:16.662522 | orchestrator |
2025-04-04 00:28:16.663768 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ******************
2025-04-04 00:28:16.664497 | orchestrator | Friday 04 April 2025 00:28:16 +0000 (0:00:00.076) 0:00:28.699 **********
2025-04-04 00:28:16.734569 | orchestrator | skipping: [testbed-manager]
2025-04-04 00:28:16.735409 | orchestrator |
2025-04-04 00:28:16.735440 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************
2025-04-04 00:28:16.736973 | orchestrator | Friday 04 April 2025 00:28:16 +0000 (0:00:00.072) 0:00:28.771 **********
2025-04-04 00:28:17.513526 | orchestrator | changed: [testbed-manager]
2025-04-04 00:28:17.514170 | orchestrator |
2025-04-04 00:28:17.514203 | orchestrator | PLAY RECAP *********************************************************************
2025-04-04 00:28:17.514226 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-04-04 00:28:17.516273 | orchestrator |
2025-04-04 00:28:17.516297 | orchestrator | 2025-04-04 00:28:17 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-04-04 00:28:17.516313 | orchestrator | 2025-04-04 00:28:17 | INFO  | Please wait and do not abort execution.
2025-04-04 00:28:17.516333 | orchestrator | Friday 04 April 2025 00:28:17 +0000 (0:00:00.779) 0:00:29.550 ********** 2025-04-04 00:28:17.518388 | orchestrator | =============================================================================== 2025-04-04 00:28:17.519653 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 5.99s 2025-04-04 00:28:17.520708 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.37s 2025-04-04 00:28:17.521155 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.33s 2025-04-04 00:28:17.522143 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.24s 2025-04-04 00:28:17.522891 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.20s 2025-04-04 00:28:17.524264 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.19s 2025-04-04 00:28:17.525038 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.18s 2025-04-04 00:28:17.525513 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.18s 2025-04-04 00:28:17.526172 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.17s 2025-04-04 00:28:17.526616 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.17s 2025-04-04 00:28:17.527088 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.17s 2025-04-04 00:28:17.527606 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.17s 2025-04-04 00:28:17.528042 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.17s 2025-04-04 00:28:17.528461 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.15s 2025-04-04 
00:28:17.529294 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.15s 2025-04-04 00:28:17.529539 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.12s 2025-04-04 00:28:17.530068 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.78s 2025-04-04 00:28:17.530603 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.18s 2025-04-04 00:28:17.531314 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.18s 2025-04-04 00:28:17.531994 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.18s 2025-04-04 00:28:17.960243 | orchestrator | + osism apply squid 2025-04-04 00:28:19.536040 | orchestrator | 2025-04-04 00:28:19 | INFO  | Task 01465f5e-cf7c-4d22-ad40-8f64496336f2 (squid) was prepared for execution. 2025-04-04 00:28:23.102766 | orchestrator | 2025-04-04 00:28:19 | INFO  | It takes a moment until task 01465f5e-cf7c-4d22-ad40-8f64496336f2 (squid) has been started and output is visible here. 
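The known_hosts play above scans each host and writes entries of the form `<host> <key-type> <base64-key>`. As a rough sketch (helper name and file layout are hypothetical, not the role's actual implementation), deduplicating scanned entries per host and key type before writing them could look like:

```shell
# Keep only the most recent scanned entry per (host, key-type) pair.
# Input format, one entry per line: "<host> <keytype> <base64key>"
dedupe_known_hosts() {
  # awk overwrites earlier entries with later ones, then we sort for stable output
  awk '{ seen[$1 " " $2] = $0 } END { for (k in seen) print seen[k] }' "$@" | sort
}
```

Running it over a file where a host was re-scanned keeps only the newer key line.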
2025-04-04 00:28:23.102956 | orchestrator | 2025-04-04 00:28:23.104940 | orchestrator | PLAY [Apply role squid] ******************************************************** 2025-04-04 00:28:23.105518 | orchestrator | 2025-04-04 00:28:23.109657 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2025-04-04 00:28:23.109975 | orchestrator | Friday 04 April 2025 00:28:23 +0000 (0:00:00.117) 0:00:00.117 ********** 2025-04-04 00:28:23.209123 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2025-04-04 00:28:23.209325 | orchestrator | 2025-04-04 00:28:23.210443 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2025-04-04 00:28:23.211236 | orchestrator | Friday 04 April 2025 00:28:23 +0000 (0:00:00.110) 0:00:00.228 ********** 2025-04-04 00:28:25.019847 | orchestrator | ok: [testbed-manager] 2025-04-04 00:28:25.020236 | orchestrator | 2025-04-04 00:28:25.020705 | orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2025-04-04 00:28:25.022173 | orchestrator | Friday 04 April 2025 00:28:25 +0000 (0:00:01.809) 0:00:02.038 ********** 2025-04-04 00:28:26.471107 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2025-04-04 00:28:26.471877 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2025-04-04 00:28:26.472474 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2025-04-04 00:28:26.473424 | orchestrator | 2025-04-04 00:28:26.474097 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2025-04-04 00:28:26.474935 | orchestrator | Friday 04 April 2025 00:28:26 +0000 (0:00:01.451) 0:00:03.489 ********** 2025-04-04 00:28:27.674726 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2025-04-04 00:28:27.675211 | 
orchestrator | 2025-04-04 00:28:27.676953 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2025-04-04 00:28:27.677374 | orchestrator | Friday 04 April 2025 00:28:27 +0000 (0:00:01.200) 0:00:04.690 ********** 2025-04-04 00:28:28.046773 | orchestrator | ok: [testbed-manager] 2025-04-04 00:28:28.047402 | orchestrator | 2025-04-04 00:28:28.047434 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2025-04-04 00:28:28.047766 | orchestrator | Friday 04 April 2025 00:28:28 +0000 (0:00:00.372) 0:00:05.063 ********** 2025-04-04 00:28:29.104089 | orchestrator | changed: [testbed-manager] 2025-04-04 00:28:29.104296 | orchestrator | 2025-04-04 00:28:29.104322 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2025-04-04 00:28:29.105051 | orchestrator | Friday 04 April 2025 00:28:29 +0000 (0:00:01.058) 0:00:06.121 ********** 2025-04-04 00:29:01.007648 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left). 
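The `FAILED - RETRYING ... (10 retries left)` line above is Ansible's retry loop on the "Manage squid service" task. The same poll-until-success pattern can be sketched in plain shell (a generic illustration, not the role's code):

```shell
# Retry a command up to $1 times, sleeping between attempts;
# returns 0 on the first success, 1 once the retries are exhausted.
retry() {
  tries=$1; shift
  i=0
  until "$@"; do
    i=$((i + 1))
    [ "$i" -ge "$tries" ] && return 1
    sleep 1
  done
}
```

For example, `retry 10 docker compose ps --status running` would approximate waiting for the squid container to come up.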
2025-04-04 00:29:01.008723 | orchestrator | ok: [testbed-manager] 2025-04-04 00:29:01.008763 | orchestrator | 2025-04-04 00:29:01.008779 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2025-04-04 00:29:01.008801 | orchestrator | Friday 04 April 2025 00:29:00 +0000 (0:00:31.898) 0:00:38.019 ********** 2025-04-04 00:29:13.523022 | orchestrator | changed: [testbed-manager] 2025-04-04 00:30:13.628547 | orchestrator | 2025-04-04 00:30:13.628695 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2025-04-04 00:30:13.628716 | orchestrator | Friday 04 April 2025 00:29:13 +0000 (0:00:12.516) 0:00:50.536 ********** 2025-04-04 00:30:13.628750 | orchestrator | Pausing for 60 seconds 2025-04-04 00:30:13.702123 | orchestrator | changed: [testbed-manager] 2025-04-04 00:30:13.702174 | orchestrator | 2025-04-04 00:30:13.702190 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2025-04-04 00:30:13.702205 | orchestrator | Friday 04 April 2025 00:30:13 +0000 (0:01:00.106) 0:01:50.642 ********** 2025-04-04 00:30:13.702232 | orchestrator | ok: [testbed-manager] 2025-04-04 00:30:13.702719 | orchestrator | 2025-04-04 00:30:13.703633 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] ***** 2025-04-04 00:30:13.704427 | orchestrator | Friday 04 April 2025 00:30:13 +0000 (0:00:00.077) 0:01:50.720 ********** 2025-04-04 00:30:14.317476 | orchestrator | changed: [testbed-manager] 2025-04-04 00:30:14.318249 | orchestrator | 2025-04-04 00:30:14.319703 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-04 00:30:14.321130 | orchestrator | 2025-04-04 00:30:14 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 
2025-04-04 00:30:14.321201 | orchestrator | 2025-04-04 00:30:14 | INFO  | Please wait and do not abort execution. 2025-04-04 00:30:14.321226 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-04 00:30:14.322061 | orchestrator | 2025-04-04 00:30:14.322706 | orchestrator | Friday 04 April 2025 00:30:14 +0000 (0:00:00.615) 0:01:51.335 ********** 2025-04-04 00:30:14.323973 | orchestrator | =============================================================================== 2025-04-04 00:30:14.324226 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.11s 2025-04-04 00:30:14.324917 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 31.90s 2025-04-04 00:30:14.325550 | orchestrator | osism.services.squid : Restart squid service --------------------------- 12.52s 2025-04-04 00:30:14.325977 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.81s 2025-04-04 00:30:14.326650 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.45s 2025-04-04 00:30:14.327274 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.20s 2025-04-04 00:30:14.328016 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 1.06s 2025-04-04 00:30:14.328398 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.62s 2025-04-04 00:30:14.329144 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.37s 2025-04-04 00:30:14.329526 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.11s 2025-04-04 00:30:14.330010 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.08s 2025-04-04 00:30:14.826264 | orchestrator | + [[ 8.1.0 != \l\a\t\e\s\t ]] 2025-04-04 00:30:14.829956 | 
orchestrator | + sed -i 's#docker_namespace: kolla#docker_namespace: kolla/release#' /opt/configuration/inventory/group_vars/all/kolla.yml 2025-04-04 00:30:14.829995 | orchestrator | ++ semver 8.1.0 9.0.0 2025-04-04 00:30:14.891464 | orchestrator | + [[ -1 -lt 0 ]] 2025-04-04 00:30:14.896821 | orchestrator | + [[ 8.1.0 != \l\a\t\e\s\t ]] 2025-04-04 00:30:14.896901 | orchestrator | + sed -i 's|^# \(network_dispatcher_scripts:\)$|\1|g' /opt/configuration/inventory/group_vars/testbed-nodes.yml 2025-04-04 00:30:14.896935 | orchestrator | + sed -i 's|^# \( - src: /opt/configuration/network/vxlan.sh\)$|\1|g' /opt/configuration/inventory/group_vars/testbed-nodes.yml /opt/configuration/inventory/group_vars/testbed-managers.yml 2025-04-04 00:30:14.901190 | orchestrator | + sed -i 's|^# \( dest: routable.d/vxlan.sh\)$|\1|g' /opt/configuration/inventory/group_vars/testbed-nodes.yml /opt/configuration/inventory/group_vars/testbed-managers.yml 2025-04-04 00:30:14.906782 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes 2025-04-04 00:30:16.466201 | orchestrator | 2025-04-04 00:30:16 | INFO  | Task 83c9180f-e945-459e-a72a-ff4665b8a477 (operator) was prepared for execution. 2025-04-04 00:30:16.466416 | orchestrator | 2025-04-04 00:30:16 | INFO  | It takes a moment until task 83c9180f-e945-459e-a72a-ff4665b8a477 (operator) has been started and output is visible here. 
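The shell trace above gates configuration edits on the manager version (`semver 8.1.0 9.0.0` printing `-1` for "older than 9.0.0") and then un-comments the vxlan network-dispatcher entries with `sed`. A minimal sketch of both steps, assuming the `semver` helper's contract is to print -1/0/1:

```shell
# Compare two versions, printing -1/0/1 like the semver helper in the trace
# (assumed behaviour; implemented here with GNU sort -V).
semver_cmp() {
  if [ "$1" = "$2" ]; then printf '0\n'; return; fi
  first=$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)
  if [ "$first" = "$1" ]; then printf '%s\n' -1; else printf '1\n'; fi
}

# Generic form of the sed trick above: strip a leading "# " from lines
# matching a pattern, leaving all other lines untouched.
uncomment_matching() {
  pat=$1; shift
  sed -i "s|^# \\($pat\\)\$|\\1|g" "$@"
}
```

With these, `[ "$(semver_cmp 8.1.0 9.0.0)" -lt 0 ]` selects the pre-9.0.0 branch, and `uncomment_matching 'network_dispatcher_scripts:' group_vars/testbed-nodes.yml` activates the commented-out entry.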
2025-04-04 00:30:19.668083 | orchestrator | 2025-04-04 00:30:19.668287 | orchestrator | PLAY [Make ssh pipelining working] ********************************************* 2025-04-04 00:30:19.668320 | orchestrator | 2025-04-04 00:30:19.668624 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-04-04 00:30:19.670323 | orchestrator | Friday 04 April 2025 00:30:19 +0000 (0:00:00.101) 0:00:00.101 ********** 2025-04-04 00:30:23.330531 | orchestrator | ok: [testbed-node-4] 2025-04-04 00:30:23.331628 | orchestrator | ok: [testbed-node-0] 2025-04-04 00:30:23.331668 | orchestrator | ok: [testbed-node-1] 2025-04-04 00:30:23.333149 | orchestrator | ok: [testbed-node-5] 2025-04-04 00:30:23.334440 | orchestrator | ok: [testbed-node-3] 2025-04-04 00:30:23.335532 | orchestrator | ok: [testbed-node-2] 2025-04-04 00:30:23.336147 | orchestrator | 2025-04-04 00:30:23.341718 | orchestrator | TASK [Do not require tty for all users] **************************************** 2025-04-04 00:30:24.120213 | orchestrator | Friday 04 April 2025 00:30:23 +0000 (0:00:03.664) 0:00:03.766 ********** 2025-04-04 00:30:24.120356 | orchestrator | ok: [testbed-node-5] 2025-04-04 00:30:24.120937 | orchestrator | ok: [testbed-node-2] 2025-04-04 00:30:24.121233 | orchestrator | ok: [testbed-node-0] 2025-04-04 00:30:24.121264 | orchestrator | ok: [testbed-node-1] 2025-04-04 00:30:24.122381 | orchestrator | ok: [testbed-node-3] 2025-04-04 00:30:24.126102 | orchestrator | ok: [testbed-node-4] 2025-04-04 00:30:24.128709 | orchestrator | 2025-04-04 00:30:24.128736 | orchestrator | PLAY [Apply role operator] ***************************************************** 2025-04-04 00:30:24.128751 | orchestrator | 2025-04-04 00:30:24.128771 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-04-04 00:30:24.131362 | orchestrator | Friday 04 April 2025 00:30:24 +0000 (0:00:00.788) 0:00:04.554 ********** 2025-04-04 
00:30:24.206367 | orchestrator | ok: [testbed-node-0] 2025-04-04 00:30:24.234188 | orchestrator | ok: [testbed-node-1] 2025-04-04 00:30:24.258479 | orchestrator | ok: [testbed-node-2] 2025-04-04 00:30:24.308839 | orchestrator | ok: [testbed-node-3] 2025-04-04 00:30:24.309387 | orchestrator | ok: [testbed-node-4] 2025-04-04 00:30:24.310171 | orchestrator | ok: [testbed-node-5] 2025-04-04 00:30:24.310956 | orchestrator | 2025-04-04 00:30:24.311234 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-04-04 00:30:24.311717 | orchestrator | Friday 04 April 2025 00:30:24 +0000 (0:00:00.190) 0:00:04.745 ********** 2025-04-04 00:30:24.385099 | orchestrator | ok: [testbed-node-0] 2025-04-04 00:30:24.413434 | orchestrator | ok: [testbed-node-1] 2025-04-04 00:30:24.438656 | orchestrator | ok: [testbed-node-2] 2025-04-04 00:30:24.510509 | orchestrator | ok: [testbed-node-3] 2025-04-04 00:30:24.511349 | orchestrator | ok: [testbed-node-4] 2025-04-04 00:30:24.512359 | orchestrator | ok: [testbed-node-5] 2025-04-04 00:30:24.513361 | orchestrator | 2025-04-04 00:30:24.514234 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-04-04 00:30:24.515223 | orchestrator | Friday 04 April 2025 00:30:24 +0000 (0:00:00.201) 0:00:04.947 ********** 2025-04-04 00:30:25.279595 | orchestrator | changed: [testbed-node-3] 2025-04-04 00:30:25.280240 | orchestrator | changed: [testbed-node-5] 2025-04-04 00:30:25.281698 | orchestrator | changed: [testbed-node-1] 2025-04-04 00:30:25.282666 | orchestrator | changed: [testbed-node-4] 2025-04-04 00:30:25.283650 | orchestrator | changed: [testbed-node-0] 2025-04-04 00:30:25.286722 | orchestrator | changed: [testbed-node-2] 2025-04-04 00:30:25.287122 | orchestrator | 2025-04-04 00:30:25.287441 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-04-04 00:30:25.289202 | orchestrator | Friday 04 April 2025 
00:30:25 +0000 (0:00:00.763) 0:00:05.711 ********** 2025-04-04 00:30:26.401150 | orchestrator | changed: [testbed-node-5] 2025-04-04 00:30:26.404261 | orchestrator | changed: [testbed-node-4] 2025-04-04 00:30:26.404713 | orchestrator | changed: [testbed-node-1] 2025-04-04 00:30:26.404735 | orchestrator | changed: [testbed-node-0] 2025-04-04 00:30:26.404973 | orchestrator | changed: [testbed-node-3] 2025-04-04 00:30:26.405713 | orchestrator | changed: [testbed-node-2] 2025-04-04 00:30:26.410071 | orchestrator | 2025-04-04 00:30:27.855765 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-04-04 00:30:27.855863 | orchestrator | Friday 04 April 2025 00:30:26 +0000 (0:00:01.125) 0:00:06.836 ********** 2025-04-04 00:30:27.855890 | orchestrator | changed: [testbed-node-0] => (item=adm) 2025-04-04 00:30:27.855933 | orchestrator | changed: [testbed-node-1] => (item=adm) 2025-04-04 00:30:27.858485 | orchestrator | changed: [testbed-node-3] => (item=adm) 2025-04-04 00:30:27.858639 | orchestrator | changed: [testbed-node-5] => (item=adm) 2025-04-04 00:30:27.859342 | orchestrator | changed: [testbed-node-2] => (item=adm) 2025-04-04 00:30:27.859503 | orchestrator | changed: [testbed-node-4] => (item=adm) 2025-04-04 00:30:27.860484 | orchestrator | changed: [testbed-node-0] => (item=sudo) 2025-04-04 00:30:27.860525 | orchestrator | changed: [testbed-node-1] => (item=sudo) 2025-04-04 00:30:27.861112 | orchestrator | changed: [testbed-node-2] => (item=sudo) 2025-04-04 00:30:27.861798 | orchestrator | changed: [testbed-node-5] => (item=sudo) 2025-04-04 00:30:27.862115 | orchestrator | changed: [testbed-node-4] => (item=sudo) 2025-04-04 00:30:27.863033 | orchestrator | changed: [testbed-node-3] => (item=sudo) 2025-04-04 00:30:27.863445 | orchestrator | 2025-04-04 00:30:27.863463 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2025-04-04 00:30:27.863894 | orchestrator | Friday 04 
April 2025 00:30:27 +0000 (0:00:01.453) 0:00:08.289 ********** 2025-04-04 00:30:29.145943 | orchestrator | changed: [testbed-node-1] 2025-04-04 00:30:29.146271 | orchestrator | changed: [testbed-node-4] 2025-04-04 00:30:29.147043 | orchestrator | changed: [testbed-node-5] 2025-04-04 00:30:29.148167 | orchestrator | changed: [testbed-node-3] 2025-04-04 00:30:29.148571 | orchestrator | changed: [testbed-node-0] 2025-04-04 00:30:29.149506 | orchestrator | changed: [testbed-node-2] 2025-04-04 00:30:29.149998 | orchestrator | 2025-04-04 00:30:29.150364 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2025-04-04 00:30:29.151394 | orchestrator | Friday 04 April 2025 00:30:29 +0000 (0:00:01.292) 0:00:09.582 ********** 2025-04-04 00:30:30.378622 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created 2025-04-04 00:30:30.378820 | orchestrator | with a mode of 0700, this may cause issues when running as another user. 
To 2025-04-04 00:30:30.380291 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually 2025-04-04 00:30:30.415885 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8) 2025-04-04 00:30:30.419203 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8) 2025-04-04 00:30:30.420018 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8) 2025-04-04 00:30:30.420045 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8) 2025-04-04 00:30:30.420062 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8) 2025-04-04 00:30:30.420079 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8) 2025-04-04 00:30:30.420101 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8) 2025-04-04 00:30:30.421047 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8) 2025-04-04 00:30:30.421149 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8) 2025-04-04 00:30:30.421828 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8) 2025-04-04 00:30:30.421883 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8) 2025-04-04 00:30:30.422478 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8) 2025-04-04 00:30:30.423226 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8) 2025-04-04 00:30:30.425457 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8) 2025-04-04 00:30:30.426518 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8) 2025-04-04 00:30:30.426552 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8) 2025-04-04 00:30:30.426733 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8) 2025-04-04 00:30:30.427299 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8) 2025-04-04 00:30:30.427693 | 
orchestrator | 2025-04-04 00:30:30.430985 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2025-04-04 00:30:30.431711 | orchestrator | Friday 04 April 2025 00:30:30 +0000 (0:00:01.270) 0:00:10.853 ********** 2025-04-04 00:30:31.085545 | orchestrator | changed: [testbed-node-4] 2025-04-04 00:30:31.086139 | orchestrator | changed: [testbed-node-1] 2025-04-04 00:30:31.087202 | orchestrator | changed: [testbed-node-0] 2025-04-04 00:30:31.089004 | orchestrator | changed: [testbed-node-5] 2025-04-04 00:30:31.089249 | orchestrator | changed: [testbed-node-2] 2025-04-04 00:30:31.091063 | orchestrator | changed: [testbed-node-3] 2025-04-04 00:30:31.093495 | orchestrator | 2025-04-04 00:30:31.093970 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2025-04-04 00:30:31.098591 | orchestrator | Friday 04 April 2025 00:30:31 +0000 (0:00:00.665) 0:00:11.519 ********** 2025-04-04 00:30:31.168640 | orchestrator | skipping: [testbed-node-0] 2025-04-04 00:30:31.196726 | orchestrator | skipping: [testbed-node-1] 2025-04-04 00:30:31.225314 | orchestrator | skipping: [testbed-node-2] 2025-04-04 00:30:31.279058 | orchestrator | skipping: [testbed-node-3] 2025-04-04 00:30:31.280351 | orchestrator | skipping: [testbed-node-4] 2025-04-04 00:30:31.281513 | orchestrator | skipping: [testbed-node-5] 2025-04-04 00:30:31.282388 | orchestrator | 2025-04-04 00:30:31.282418 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2025-04-04 00:30:31.282607 | orchestrator | Friday 04 April 2025 00:30:31 +0000 (0:00:00.196) 0:00:11.716 ********** 2025-04-04 00:30:32.167237 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-04-04 00:30:32.167612 | orchestrator | changed: [testbed-node-4] 2025-04-04 00:30:32.168820 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-04-04 00:30:32.169527 | orchestrator | changed: [testbed-node-0] 2025-04-04 
00:30:32.170557 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-04-04 00:30:32.171561 | orchestrator | changed: [testbed-node-3] 2025-04-04 00:30:32.172324 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-04-04 00:30:32.174105 | orchestrator | changed: [testbed-node-5] 2025-04-04 00:30:32.174338 | orchestrator | changed: [testbed-node-1] => (item=None) 2025-04-04 00:30:32.174897 | orchestrator | changed: [testbed-node-1] 2025-04-04 00:30:32.178723 | orchestrator | changed: [testbed-node-2] => (item=None) 2025-04-04 00:30:32.179191 | orchestrator | changed: [testbed-node-2] 2025-04-04 00:30:32.179784 | orchestrator | 2025-04-04 00:30:32.180442 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2025-04-04 00:30:32.182924 | orchestrator | Friday 04 April 2025 00:30:32 +0000 (0:00:00.887) 0:00:12.603 ********** 2025-04-04 00:30:32.224135 | orchestrator | skipping: [testbed-node-0] 2025-04-04 00:30:32.291354 | orchestrator | skipping: [testbed-node-1] 2025-04-04 00:30:32.320438 | orchestrator | skipping: [testbed-node-2] 2025-04-04 00:30:32.360932 | orchestrator | skipping: [testbed-node-3] 2025-04-04 00:30:32.361061 | orchestrator | skipping: [testbed-node-4] 2025-04-04 00:30:32.362133 | orchestrator | skipping: [testbed-node-5] 2025-04-04 00:30:32.362325 | orchestrator | 2025-04-04 00:30:32.366251 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2025-04-04 00:30:32.368034 | orchestrator | Friday 04 April 2025 00:30:32 +0000 (0:00:00.193) 0:00:12.796 ********** 2025-04-04 00:30:32.410725 | orchestrator | skipping: [testbed-node-0] 2025-04-04 00:30:32.442646 | orchestrator | skipping: [testbed-node-1] 2025-04-04 00:30:32.471330 | orchestrator | skipping: [testbed-node-2] 2025-04-04 00:30:32.514713 | orchestrator | skipping: [testbed-node-3] 2025-04-04 00:30:32.568606 | orchestrator | skipping: [testbed-node-4] 2025-04-04 00:30:32.569288 | 
orchestrator | skipping: [testbed-node-5] 2025-04-04 00:30:32.570702 | orchestrator | 2025-04-04 00:30:32.571514 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2025-04-04 00:30:32.575987 | orchestrator | Friday 04 April 2025 00:30:32 +0000 (0:00:00.208) 0:00:13.005 ********** 2025-04-04 00:30:32.645695 | orchestrator | skipping: [testbed-node-0] 2025-04-04 00:30:32.667786 | orchestrator | skipping: [testbed-node-1] 2025-04-04 00:30:32.703397 | orchestrator | skipping: [testbed-node-2] 2025-04-04 00:30:32.750004 | orchestrator | skipping: [testbed-node-3] 2025-04-04 00:30:32.751057 | orchestrator | skipping: [testbed-node-4] 2025-04-04 00:30:32.751435 | orchestrator | skipping: [testbed-node-5] 2025-04-04 00:30:32.753831 | orchestrator | 2025-04-04 00:30:33.605679 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2025-04-04 00:30:33.605801 | orchestrator | Friday 04 April 2025 00:30:32 +0000 (0:00:00.180) 0:00:13.186 ********** 2025-04-04 00:30:33.605834 | orchestrator | changed: [testbed-node-0] 2025-04-04 00:30:33.606778 | orchestrator | changed: [testbed-node-1] 2025-04-04 00:30:33.606807 | orchestrator | changed: [testbed-node-2] 2025-04-04 00:30:33.606828 | orchestrator | changed: [testbed-node-3] 2025-04-04 00:30:33.607604 | orchestrator | changed: [testbed-node-4] 2025-04-04 00:30:33.608525 | orchestrator | changed: [testbed-node-5] 2025-04-04 00:30:33.612055 | orchestrator | 2025-04-04 00:30:33.612497 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2025-04-04 00:30:33.613035 | orchestrator | Friday 04 April 2025 00:30:33 +0000 (0:00:00.850) 0:00:14.036 ********** 2025-04-04 00:30:33.689943 | orchestrator | skipping: [testbed-node-0] 2025-04-04 00:30:33.728650 | orchestrator | skipping: [testbed-node-1] 2025-04-04 00:30:33.760894 | orchestrator | skipping: [testbed-node-2] 2025-04-04 00:30:33.901904 | 
orchestrator | skipping: [testbed-node-3] 2025-04-04 00:30:33.902499 | orchestrator | skipping: [testbed-node-4] 2025-04-04 00:30:33.903647 | orchestrator | skipping: [testbed-node-5] 2025-04-04 00:30:33.905717 | orchestrator | 2025-04-04 00:30:33.906142 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-04 00:30:33.906179 | orchestrator | 2025-04-04 00:30:33 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-04-04 00:30:33.907637 | orchestrator | 2025-04-04 00:30:33 | INFO  | Please wait and do not abort execution. 2025-04-04 00:30:33.907667 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-04-04 00:30:33.908167 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-04-04 00:30:33.908997 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-04-04 00:30:33.909717 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-04-04 00:30:33.910610 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-04-04 00:30:33.911308 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-04-04 00:30:33.912251 | orchestrator | 2025-04-04 00:30:33.913153 | orchestrator | Friday 04 April 2025 00:30:33 +0000 (0:00:00.302) 0:00:14.339 ********** 2025-04-04 00:30:33.913974 | orchestrator | =============================================================================== 2025-04-04 00:30:33.915131 | orchestrator | Gathering Facts --------------------------------------------------------- 3.66s 2025-04-04 00:30:33.915622 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.45s 2025-04-04 00:30:33.916094 | orchestrator | 
osism.commons.operator : Copy user sudoers file ------------------------- 1.29s 2025-04-04 00:30:33.916473 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.27s 2025-04-04 00:30:33.917304 | orchestrator | osism.commons.operator : Create user ------------------------------------ 1.13s 2025-04-04 00:30:33.917719 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.89s 2025-04-04 00:30:33.918181 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.85s 2025-04-04 00:30:33.918575 | orchestrator | Do not require tty for all users ---------------------------------------- 0.79s 2025-04-04 00:30:33.919342 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.76s 2025-04-04 00:30:33.919599 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.67s 2025-04-04 00:30:33.920014 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.30s 2025-04-04 00:30:33.920396 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.21s 2025-04-04 00:30:33.920943 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.20s 2025-04-04 00:30:33.921454 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.20s 2025-04-04 00:30:33.921575 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.19s 2025-04-04 00:30:33.922157 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.19s 2025-04-04 00:30:33.922334 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.18s 2025-04-04 00:30:34.446095 | orchestrator | + osism apply --environment custom facts 2025-04-04 00:30:35.941436 | orchestrator | 2025-04-04 00:30:35 | INFO  | Trying to run 
play facts in environment custom 2025-04-04 00:30:35.991631 | orchestrator | 2025-04-04 00:30:35 | INFO  | Task 40cf99e7-e22e-4fe3-9d4e-2c4385fc5a7c (facts) was prepared for execution. 2025-04-04 00:30:35.992911 | orchestrator | 2025-04-04 00:30:35 | INFO  | It takes a moment until task 40cf99e7-e22e-4fe3-9d4e-2c4385fc5a7c (facts) has been started and output is visible here. 2025-04-04 00:30:39.278826 | orchestrator | 2025-04-04 00:30:39.279828 | orchestrator | PLAY [Copy custom network devices fact] **************************************** 2025-04-04 00:30:39.281610 | orchestrator | 2025-04-04 00:30:39.283518 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-04-04 00:30:39.283969 | orchestrator | Friday 04 April 2025 00:30:39 +0000 (0:00:00.089) 0:00:00.090 ********** 2025-04-04 00:30:40.624546 | orchestrator | ok: [testbed-manager] 2025-04-04 00:30:41.743241 | orchestrator | changed: [testbed-node-0] 2025-04-04 00:30:41.748067 | orchestrator | changed: [testbed-node-4] 2025-04-04 00:30:41.748956 | orchestrator | changed: [testbed-node-3] 2025-04-04 00:30:41.748992 | orchestrator | changed: [testbed-node-5] 2025-04-04 00:30:41.749760 | orchestrator | changed: [testbed-node-1] 2025-04-04 00:30:41.750650 | orchestrator | changed: [testbed-node-2] 2025-04-04 00:30:41.751635 | orchestrator | 2025-04-04 00:30:41.752455 | orchestrator | TASK [Copy fact file] ********************************************************** 2025-04-04 00:30:41.753447 | orchestrator | Friday 04 April 2025 00:30:41 +0000 (0:00:02.467) 0:00:02.557 ********** 2025-04-04 00:30:43.005037 | orchestrator | ok: [testbed-manager] 2025-04-04 00:30:43.849231 | orchestrator | changed: [testbed-node-4] 2025-04-04 00:30:43.849431 | orchestrator | changed: [testbed-node-0] 2025-04-04 00:30:43.849458 | orchestrator | changed: [testbed-node-5] 2025-04-04 00:30:43.853175 | orchestrator | changed: [testbed-node-3] 2025-04-04 00:30:43.853974 | orchestrator | 
changed: [testbed-node-1] 2025-04-04 00:30:43.854009 | orchestrator | changed: [testbed-node-2] 2025-04-04 00:30:43.854512 | orchestrator | 2025-04-04 00:30:43.855038 | orchestrator | PLAY [Copy custom ceph devices facts] ****************************************** 2025-04-04 00:30:43.855341 | orchestrator | 2025-04-04 00:30:43.855960 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-04-04 00:30:43.856350 | orchestrator | Friday 04 April 2025 00:30:43 +0000 (0:00:02.106) 0:00:04.664 ********** 2025-04-04 00:30:43.946995 | orchestrator | ok: [testbed-node-3] 2025-04-04 00:30:43.947150 | orchestrator | ok: [testbed-node-4] 2025-04-04 00:30:43.947530 | orchestrator | ok: [testbed-node-5] 2025-04-04 00:30:43.947965 | orchestrator | 2025-04-04 00:30:43.948433 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-04-04 00:30:43.948528 | orchestrator | Friday 04 April 2025 00:30:43 +0000 (0:00:00.100) 0:00:04.765 ********** 2025-04-04 00:30:44.093906 | orchestrator | ok: [testbed-node-3] 2025-04-04 00:30:44.094711 | orchestrator | ok: [testbed-node-4] 2025-04-04 00:30:44.096050 | orchestrator | ok: [testbed-node-5] 2025-04-04 00:30:44.099416 | orchestrator | 2025-04-04 00:30:44.100017 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-04-04 00:30:44.100559 | orchestrator | Friday 04 April 2025 00:30:44 +0000 (0:00:00.145) 0:00:04.910 ********** 2025-04-04 00:30:44.242661 | orchestrator | ok: [testbed-node-3] 2025-04-04 00:30:44.245416 | orchestrator | ok: [testbed-node-4] 2025-04-04 00:30:44.248288 | orchestrator | ok: [testbed-node-5] 2025-04-04 00:30:44.251181 | orchestrator | 2025-04-04 00:30:44.252073 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-04-04 00:30:44.252869 | orchestrator | Friday 04 April 2025 00:30:44 +0000 (0:00:00.149) 0:00:05.060 ********** 
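The two plays above distribute JSON fact files to the hosts. A minimal sketch of how such Ansible "local facts" work, assuming the usual `/etc/ansible/facts.d` layout — the directory path, fact name, and device list here are illustrative stand-ins, not taken from the testbed repository:

```shell
# A *.fact file containing JSON (or an executable printing JSON) placed in
# /etc/ansible/facts.d is read during fact gathering and exposed to playbooks
# as ansible_local.<basename>. Using a temp dir as a stand-in here:
FACTS_D=$(mktemp -d)
cat > "$FACTS_D/testbed_ceph_osd_devices.fact" <<'EOF'
{"devices": ["/dev/sdb", "/dev/sdc", "/dev/sdd"]}
EOF
# The file must parse as JSON, otherwise fact gathering reports an error:
python3 -m json.tool "$FACTS_D/testbed_ceph_osd_devices.fact"
```

After the next fact-gathering run, a playbook could then reference the list as `ansible_local.testbed_ceph_osd_devices.devices`.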
2025-04-04 00:30:44.408835 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-04-04 00:30:44.413070 | orchestrator | 2025-04-04 00:30:44.413579 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-04-04 00:30:44.415356 | orchestrator | Friday 04 April 2025 00:30:44 +0000 (0:00:00.163) 0:00:05.224 ********** 2025-04-04 00:30:44.850284 | orchestrator | ok: [testbed-node-3] 2025-04-04 00:30:44.851079 | orchestrator | ok: [testbed-node-4] 2025-04-04 00:30:44.854351 | orchestrator | ok: [testbed-node-5] 2025-04-04 00:30:44.855449 | orchestrator | 2025-04-04 00:30:44.856496 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-04-04 00:30:44.856593 | orchestrator | Friday 04 April 2025 00:30:44 +0000 (0:00:00.443) 0:00:05.667 ********** 2025-04-04 00:30:44.981571 | orchestrator | skipping: [testbed-node-3] 2025-04-04 00:30:44.983063 | orchestrator | skipping: [testbed-node-4] 2025-04-04 00:30:44.984969 | orchestrator | skipping: [testbed-node-5] 2025-04-04 00:30:44.986169 | orchestrator | 2025-04-04 00:30:44.986989 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-04-04 00:30:44.990493 | orchestrator | Friday 04 April 2025 00:30:44 +0000 (0:00:00.131) 0:00:05.799 ********** 2025-04-04 00:30:46.201914 | orchestrator | changed: [testbed-node-3] 2025-04-04 00:30:46.203869 | orchestrator | changed: [testbed-node-4] 2025-04-04 00:30:46.204080 | orchestrator | changed: [testbed-node-5] 2025-04-04 00:30:46.207911 | orchestrator | 2025-04-04 00:30:46.209539 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-04-04 00:30:46.211030 | orchestrator | Friday 04 April 2025 00:30:46 +0000 (0:00:01.217) 0:00:07.017 ********** 2025-04-04 00:30:46.756622 | 
orchestrator | ok: [testbed-node-3] 2025-04-04 00:30:46.756754 | orchestrator | ok: [testbed-node-4] 2025-04-04 00:30:46.756888 | orchestrator | ok: [testbed-node-5] 2025-04-04 00:30:46.757706 | orchestrator | 2025-04-04 00:30:46.757937 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-04-04 00:30:46.763087 | orchestrator | Friday 04 April 2025 00:30:46 +0000 (0:00:00.554) 0:00:07.571 ********** 2025-04-04 00:30:47.981128 | orchestrator | changed: [testbed-node-3] 2025-04-04 00:30:47.983534 | orchestrator | changed: [testbed-node-4] 2025-04-04 00:30:47.983984 | orchestrator | changed: [testbed-node-5] 2025-04-04 00:30:47.988230 | orchestrator | 2025-04-04 00:30:47.988838 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-04-04 00:30:47.992284 | orchestrator | Friday 04 April 2025 00:30:47 +0000 (0:00:01.223) 0:00:08.795 ********** 2025-04-04 00:31:02.297815 | orchestrator | changed: [testbed-node-4] 2025-04-04 00:31:02.352124 | orchestrator | changed: [testbed-node-5] 2025-04-04 00:31:02.352228 | orchestrator | changed: [testbed-node-3] 2025-04-04 00:31:02.352247 | orchestrator | 2025-04-04 00:31:02.352264 | orchestrator | TASK [Install required packages (RedHat)] ************************************** 2025-04-04 00:31:02.352279 | orchestrator | Friday 04 April 2025 00:31:02 +0000 (0:00:14.310) 0:00:23.106 ********** 2025-04-04 00:31:02.352328 | orchestrator | skipping: [testbed-node-3] 2025-04-04 00:31:02.408422 | orchestrator | skipping: [testbed-node-4] 2025-04-04 00:31:02.409170 | orchestrator | skipping: [testbed-node-5] 2025-04-04 00:31:02.410960 | orchestrator | 2025-04-04 00:31:02.411718 | orchestrator | TASK [Install required packages (Debian)] ************************************** 2025-04-04 00:31:02.412412 | orchestrator | Friday 04 April 2025 00:31:02 +0000 (0:00:00.118) 0:00:23.225 ********** 2025-04-04 00:31:10.732823 | orchestrator | changed: 
[testbed-node-5] 2025-04-04 00:31:10.734977 | orchestrator | changed: [testbed-node-4] 2025-04-04 00:31:10.735127 | orchestrator | changed: [testbed-node-3] 2025-04-04 00:31:10.738159 | orchestrator | 2025-04-04 00:31:10.739057 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-04-04 00:31:10.739978 | orchestrator | Friday 04 April 2025 00:31:10 +0000 (0:00:08.322) 0:00:31.547 ********** 2025-04-04 00:31:11.201936 | orchestrator | ok: [testbed-node-3] 2025-04-04 00:31:11.202279 | orchestrator | ok: [testbed-node-4] 2025-04-04 00:31:11.202312 | orchestrator | ok: [testbed-node-5] 2025-04-04 00:31:11.202351 | orchestrator | 2025-04-04 00:31:11.203358 | orchestrator | TASK [Copy fact files] ********************************************************* 2025-04-04 00:31:11.204080 | orchestrator | Friday 04 April 2025 00:31:11 +0000 (0:00:00.470) 0:00:32.018 ********** 2025-04-04 00:31:15.259714 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices) 2025-04-04 00:31:15.260191 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices) 2025-04-04 00:31:15.261072 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices) 2025-04-04 00:31:15.261604 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all) 2025-04-04 00:31:15.261629 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all) 2025-04-04 00:31:15.262494 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all) 2025-04-04 00:31:15.262613 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices) 2025-04-04 00:31:15.266112 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices) 2025-04-04 00:31:15.266170 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices) 2025-04-04 00:31:15.267192 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all) 2025-04-04 00:31:15.268172 | 
orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all) 2025-04-04 00:31:15.269080 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all) 2025-04-04 00:31:15.269698 | orchestrator | 2025-04-04 00:31:15.270138 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-04-04 00:31:15.270444 | orchestrator | Friday 04 April 2025 00:31:15 +0000 (0:00:04.055) 0:00:36.074 ********** 2025-04-04 00:31:16.678324 | orchestrator | ok: [testbed-node-4] 2025-04-04 00:31:16.678656 | orchestrator | ok: [testbed-node-5] 2025-04-04 00:31:16.682105 | orchestrator | ok: [testbed-node-3] 2025-04-04 00:31:16.682643 | orchestrator | 2025-04-04 00:31:16.683675 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-04-04 00:31:16.684306 | orchestrator | 2025-04-04 00:31:16.684987 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-04-04 00:31:16.685636 | orchestrator | Friday 04 April 2025 00:31:16 +0000 (0:00:01.419) 0:00:37.493 ********** 2025-04-04 00:31:18.228756 | orchestrator | ok: [testbed-node-1] 2025-04-04 00:31:21.225376 | orchestrator | ok: [testbed-node-0] 2025-04-04 00:31:21.226064 | orchestrator | ok: [testbed-node-2] 2025-04-04 00:31:21.226988 | orchestrator | ok: [testbed-node-3] 2025-04-04 00:31:21.228217 | orchestrator | ok: [testbed-manager] 2025-04-04 00:31:21.228470 | orchestrator | ok: [testbed-node-4] 2025-04-04 00:31:21.229405 | orchestrator | ok: [testbed-node-5] 2025-04-04 00:31:21.230532 | orchestrator | 2025-04-04 00:31:21.232122 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-04 00:31:21.232313 | orchestrator | 2025-04-04 00:31:21 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 
2025-04-04 00:31:21.232579 | orchestrator | 2025-04-04 00:31:21 | INFO  | Please wait and do not abort execution.
2025-04-04 00:31:21.233799 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-04-04 00:31:21.234368 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-04-04 00:31:21.235211 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-04-04 00:31:21.236254 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-04-04 00:31:21.236587 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-04-04 00:31:21.237426 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-04-04 00:31:21.237936 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-04-04 00:31:21.238339 | orchestrator |
2025-04-04 00:31:21.238836 | orchestrator | Friday 04 April 2025 00:31:21 +0000 (0:00:04.548) 0:00:42.041 **********
2025-04-04 00:31:21.239595 | orchestrator | ===============================================================================
2025-04-04 00:31:21.239940 | orchestrator | osism.commons.repository : Update package cache ------------------------ 14.31s
2025-04-04 00:31:21.240252 | orchestrator | Install required packages (Debian) -------------------------------------- 8.32s
2025-04-04 00:31:21.240482 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.55s
2025-04-04 00:31:21.240810 | orchestrator | Copy fact files --------------------------------------------------------- 4.06s
2025-04-04 00:31:21.241190 | orchestrator | Create custom facts directory ------------------------------------------- 2.47s
2025-04-04 00:31:21.241561 | orchestrator | Copy fact file ---------------------------------------------------------- 2.11s
2025-04-04 00:31:21.241796 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.42s
2025-04-04 00:31:21.242172 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.22s
2025-04-04 00:31:21.242394 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.22s
2025-04-04 00:31:21.242694 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.55s
2025-04-04 00:31:21.243014 | orchestrator | Create custom facts directory ------------------------------------------- 0.47s
2025-04-04 00:31:21.243419 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.44s
2025-04-04 00:31:21.244098 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.16s
2025-04-04 00:31:21.244205 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.15s
2025-04-04 00:31:21.244727 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.15s
2025-04-04 00:31:21.244807 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.13s
2025-04-04 00:31:21.245141 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.12s
2025-04-04 00:31:21.245448 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.10s
2025-04-04 00:31:21.822277 | orchestrator | + osism apply bootstrap
2025-04-04 00:31:23.421006 | orchestrator | 2025-04-04 00:31:23 | INFO  | Task 595307a0-2187-4eed-8fa9-042b28bad4e2 (bootstrap) was prepared for execution.
2025-04-04 00:31:26.995107 | orchestrator | 2025-04-04 00:31:23 | INFO  | It takes a moment until task 595307a0-2187-4eed-8fa9-042b28bad4e2 (bootstrap) has been started and output is visible here.
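The repository role's "Copy ubuntu.sources file" and "Remove sources.list file" steps reflect the switch on Ubuntu 24.04 from the legacy one-line `sources.list` to a deb822-style file under `/etc/apt/sources.list.d/` (hence the skipped "Include tasks for Ubuntu < 24.04"). A minimal sketch of that format — the mirror URI, suites, and keyring path below are illustrative values, not the testbed's actual configuration:

```
# /etc/apt/sources.list.d/ubuntu.sources (deb822 format, values illustrative)
Types: deb
URIs: http://archive.ubuntu.com/ubuntu
Suites: noble noble-updates noble-security
Components: main restricted universe multiverse
Signed-By: /usr/share/keyrings/ubuntu-archive-keyring.gpg
```

Each stanza groups what previously took several `deb` lines, and `Signed-By` pins the repository to a specific keyring instead of the global trusted keys.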
2025-04-04 00:31:26.995293 | orchestrator | 2025-04-04 00:31:26.995423 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************ 2025-04-04 00:31:26.995445 | orchestrator | 2025-04-04 00:31:26.995465 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************ 2025-04-04 00:31:26.995725 | orchestrator | Friday 04 April 2025 00:31:26 +0000 (0:00:00.164) 0:00:00.164 ********** 2025-04-04 00:31:27.082141 | orchestrator | ok: [testbed-manager] 2025-04-04 00:31:27.110798 | orchestrator | ok: [testbed-node-3] 2025-04-04 00:31:27.142222 | orchestrator | ok: [testbed-node-4] 2025-04-04 00:31:27.168625 | orchestrator | ok: [testbed-node-5] 2025-04-04 00:31:27.261373 | orchestrator | ok: [testbed-node-0] 2025-04-04 00:31:27.262139 | orchestrator | ok: [testbed-node-1] 2025-04-04 00:31:27.263129 | orchestrator | ok: [testbed-node-2] 2025-04-04 00:31:27.263318 | orchestrator | 2025-04-04 00:31:27.263612 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-04-04 00:31:27.264115 | orchestrator | 2025-04-04 00:31:27.264404 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-04-04 00:31:27.266284 | orchestrator | Friday 04 April 2025 00:31:27 +0000 (0:00:00.270) 0:00:00.434 ********** 2025-04-04 00:31:30.714334 | orchestrator | ok: [testbed-node-0] 2025-04-04 00:31:30.714954 | orchestrator | ok: [testbed-node-1] 2025-04-04 00:31:30.716761 | orchestrator | ok: [testbed-node-2] 2025-04-04 00:31:30.717903 | orchestrator | ok: [testbed-node-4] 2025-04-04 00:31:30.719532 | orchestrator | ok: [testbed-node-5] 2025-04-04 00:31:30.720233 | orchestrator | ok: [testbed-node-3] 2025-04-04 00:31:30.721094 | orchestrator | ok: [testbed-manager] 2025-04-04 00:31:30.722086 | orchestrator | 2025-04-04 00:31:30.722477 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] *************************** 
2025-04-04 00:31:30.723840 | orchestrator | 2025-04-04 00:31:30.725081 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-04-04 00:31:30.725816 | orchestrator | Friday 04 April 2025 00:31:30 +0000 (0:00:03.452) 0:00:03.887 ********** 2025-04-04 00:31:30.828129 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2025-04-04 00:31:30.828574 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2025-04-04 00:31:30.828606 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)  2025-04-04 00:31:30.870650 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2025-04-04 00:31:30.871012 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-04-04 00:31:30.871317 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2025-04-04 00:31:30.871404 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-04-04 00:31:30.871876 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-04-04 00:31:30.926249 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-04-04 00:31:30.930126 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)  2025-04-04 00:31:30.930166 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2025-04-04 00:31:31.235833 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)  2025-04-04 00:31:31.235977 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2025-04-04 00:31:31.235995 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-04-04 00:31:31.236010 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-04-04 00:31:31.236041 | orchestrator | skipping: [testbed-manager] 2025-04-04 00:31:31.236816 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-04-04 00:31:31.238107 | orchestrator | skipping: [testbed-node-4] => 
(item=testbed-node-4)  2025-04-04 00:31:31.241486 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-04-04 00:31:31.242168 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)  2025-04-04 00:31:31.242526 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-04-04 00:31:31.243324 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-04-04 00:31:31.243969 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-04-04 00:31:31.244617 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-04-04 00:31:31.245254 | orchestrator | skipping: [testbed-node-3] 2025-04-04 00:31:31.246006 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-04-04 00:31:31.246991 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)  2025-04-04 00:31:31.247206 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-04-04 00:31:31.248005 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-04-04 00:31:31.248558 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-04-04 00:31:31.249265 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-04-04 00:31:31.249733 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-04-04 00:31:31.250341 | orchestrator | skipping: [testbed-node-5] 2025-04-04 00:31:31.250936 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-04-04 00:31:31.251494 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)  2025-04-04 00:31:31.252071 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2025-04-04 00:31:31.252511 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-04-04 00:31:31.253247 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-04-04 00:31:31.254070 | orchestrator | skipping: [testbed-node-2] => 
(item=testbed-node-3)  2025-04-04 00:31:31.257109 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2025-04-04 00:31:31.257582 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-04-04 00:31:31.257607 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-04-04 00:31:31.257623 | orchestrator | skipping: [testbed-node-4] 2025-04-04 00:31:31.257638 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2025-04-04 00:31:31.257653 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-04-04 00:31:31.257683 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-04-04 00:31:31.257704 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-04-04 00:31:31.258309 | orchestrator | skipping: [testbed-node-0] 2025-04-04 00:31:31.258987 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2025-04-04 00:31:31.259663 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-04-04 00:31:31.260324 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-04-04 00:31:31.260976 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-04-04 00:31:31.261442 | orchestrator | skipping: [testbed-node-1] 2025-04-04 00:31:31.262367 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-04-04 00:31:31.262742 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-04-04 00:31:31.266496 | orchestrator | skipping: [testbed-node-2] 2025-04-04 00:31:31.267002 | orchestrator | 2025-04-04 00:31:31.267429 | orchestrator | PLAY [Apply bootstrap roles part 1] ******************************************** 2025-04-04 00:31:31.267874 | orchestrator | 2025-04-04 00:31:31.268440 | orchestrator | TASK [osism.commons.hostname : Set hostname_name fact] ************************* 2025-04-04 00:31:31.268902 | orchestrator | Friday 04 April 2025 00:31:31 +0000 (0:00:00.520) 
0:00:04.407 ********** 2025-04-04 00:31:31.322326 | orchestrator | ok: [testbed-manager] 2025-04-04 00:31:31.350459 | orchestrator | ok: [testbed-node-3] 2025-04-04 00:31:31.376666 | orchestrator | ok: [testbed-node-4] 2025-04-04 00:31:31.404876 | orchestrator | ok: [testbed-node-5] 2025-04-04 00:31:31.474449 | orchestrator | ok: [testbed-node-0] 2025-04-04 00:31:31.474684 | orchestrator | ok: [testbed-node-1] 2025-04-04 00:31:31.475519 | orchestrator | ok: [testbed-node-2] 2025-04-04 00:31:31.476307 | orchestrator | 2025-04-04 00:31:31.477393 | orchestrator | TASK [osism.commons.hostname : Set hostname] *********************************** 2025-04-04 00:31:31.477940 | orchestrator | Friday 04 April 2025 00:31:31 +0000 (0:00:00.239) 0:00:04.647 ********** 2025-04-04 00:31:32.864292 | orchestrator | ok: [testbed-manager] 2025-04-04 00:31:32.865094 | orchestrator | ok: [testbed-node-0] 2025-04-04 00:31:32.865173 | orchestrator | ok: [testbed-node-4] 2025-04-04 00:31:32.867380 | orchestrator | ok: [testbed-node-1] 2025-04-04 00:31:32.868963 | orchestrator | ok: [testbed-node-5] 2025-04-04 00:31:32.869716 | orchestrator | ok: [testbed-node-3] 2025-04-04 00:31:32.870537 | orchestrator | ok: [testbed-node-2] 2025-04-04 00:31:32.871424 | orchestrator | 2025-04-04 00:31:32.872134 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] ***************************** 2025-04-04 00:31:32.872911 | orchestrator | Friday 04 April 2025 00:31:32 +0000 (0:00:01.388) 0:00:06.035 ********** 2025-04-04 00:31:34.424438 | orchestrator | ok: [testbed-manager] 2025-04-04 00:31:34.424633 | orchestrator | ok: [testbed-node-4] 2025-04-04 00:31:34.424968 | orchestrator | ok: [testbed-node-3] 2025-04-04 00:31:34.425004 | orchestrator | ok: [testbed-node-5] 2025-04-04 00:31:34.425325 | orchestrator | ok: [testbed-node-1] 2025-04-04 00:31:34.425661 | orchestrator | ok: [testbed-node-0] 2025-04-04 00:31:34.426210 | orchestrator | ok: [testbed-node-2] 2025-04-04 00:31:34.426838 | 
orchestrator | 2025-04-04 00:31:34.428147 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] *********************** 2025-04-04 00:31:34.429089 | orchestrator | Friday 04 April 2025 00:31:34 +0000 (0:00:01.555) 0:00:07.591 ********** 2025-04-04 00:31:34.726764 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-04-04 00:31:34.728031 | orchestrator | 2025-04-04 00:31:34.728836 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ****************************** 2025-04-04 00:31:34.729965 | orchestrator | Friday 04 April 2025 00:31:34 +0000 (0:00:00.308) 0:00:07.899 ********** 2025-04-04 00:31:36.981682 | orchestrator | changed: [testbed-node-4] 2025-04-04 00:31:36.982976 | orchestrator | changed: [testbed-manager] 2025-04-04 00:31:36.984054 | orchestrator | changed: [testbed-node-0] 2025-04-04 00:31:36.984182 | orchestrator | changed: [testbed-node-1] 2025-04-04 00:31:36.984964 | orchestrator | changed: [testbed-node-3] 2025-04-04 00:31:36.986514 | orchestrator | changed: [testbed-node-2] 2025-04-04 00:31:36.987430 | orchestrator | changed: [testbed-node-5] 2025-04-04 00:31:36.988475 | orchestrator | 2025-04-04 00:31:36.989163 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] *************** 2025-04-04 00:31:36.989787 | orchestrator | Friday 04 April 2025 00:31:36 +0000 (0:00:02.252) 0:00:10.152 ********** 2025-04-04 00:31:37.063351 | orchestrator | skipping: [testbed-manager] 2025-04-04 00:31:37.284484 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-04-04 00:31:37.285040 | orchestrator | 2025-04-04 00:31:37.286257 | 
orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] **************** 2025-04-04 00:31:37.288142 | orchestrator | Friday 04 April 2025 00:31:37 +0000 (0:00:00.301) 0:00:10.454 ********** 2025-04-04 00:31:38.242605 | orchestrator | changed: [testbed-node-3] 2025-04-04 00:31:38.243489 | orchestrator | changed: [testbed-node-4] 2025-04-04 00:31:38.243524 | orchestrator | changed: [testbed-node-5] 2025-04-04 00:31:38.244114 | orchestrator | changed: [testbed-node-0] 2025-04-04 00:31:38.245004 | orchestrator | changed: [testbed-node-1] 2025-04-04 00:31:38.245952 | orchestrator | changed: [testbed-node-2] 2025-04-04 00:31:38.246341 | orchestrator | 2025-04-04 00:31:38.246821 | orchestrator | TASK [osism.commons.proxy : Set system wide settings in environment file] ****** 2025-04-04 00:31:38.247572 | orchestrator | Friday 04 April 2025 00:31:38 +0000 (0:00:00.957) 0:00:11.411 ********** 2025-04-04 00:31:38.322139 | orchestrator | skipping: [testbed-manager] 2025-04-04 00:31:38.935666 | orchestrator | changed: [testbed-node-3] 2025-04-04 00:31:38.935832 | orchestrator | changed: [testbed-node-0] 2025-04-04 00:31:38.936413 | orchestrator | changed: [testbed-node-2] 2025-04-04 00:31:38.937348 | orchestrator | changed: [testbed-node-1] 2025-04-04 00:31:38.939315 | orchestrator | changed: [testbed-node-4] 2025-04-04 00:31:38.939692 | orchestrator | changed: [testbed-node-5] 2025-04-04 00:31:38.941436 | orchestrator | 2025-04-04 00:31:38.941720 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] *** 2025-04-04 00:31:38.942684 | orchestrator | Friday 04 April 2025 00:31:38 +0000 (0:00:00.696) 0:00:12.107 ********** 2025-04-04 00:31:39.040074 | orchestrator | skipping: [testbed-node-3] 2025-04-04 00:31:39.069903 | orchestrator | skipping: [testbed-node-4] 2025-04-04 00:31:39.104502 | orchestrator | skipping: [testbed-node-5] 2025-04-04 00:31:39.424492 | orchestrator | skipping: [testbed-node-0] 2025-04-04 
00:31:39.425539 | orchestrator | skipping: [testbed-node-1] 2025-04-04 00:31:39.429034 | orchestrator | skipping: [testbed-node-2] 2025-04-04 00:31:39.430076 | orchestrator | ok: [testbed-manager] 2025-04-04 00:31:39.431176 | orchestrator | 2025-04-04 00:31:39.431603 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2025-04-04 00:31:39.432449 | orchestrator | Friday 04 April 2025 00:31:39 +0000 (0:00:00.488) 0:00:12.596 ********** 2025-04-04 00:31:39.500349 | orchestrator | skipping: [testbed-manager] 2025-04-04 00:31:39.535489 | orchestrator | skipping: [testbed-node-3] 2025-04-04 00:31:39.570693 | orchestrator | skipping: [testbed-node-4] 2025-04-04 00:31:39.593707 | orchestrator | skipping: [testbed-node-5] 2025-04-04 00:31:39.661436 | orchestrator | skipping: [testbed-node-0] 2025-04-04 00:31:39.663007 | orchestrator | skipping: [testbed-node-1] 2025-04-04 00:31:39.663346 | orchestrator | skipping: [testbed-node-2] 2025-04-04 00:31:39.664110 | orchestrator | 2025-04-04 00:31:39.664539 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2025-04-04 00:31:39.665028 | orchestrator | Friday 04 April 2025 00:31:39 +0000 (0:00:00.238) 0:00:12.834 ********** 2025-04-04 00:31:40.008604 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-04-04 00:31:40.008756 | orchestrator | 2025-04-04 00:31:40.009195 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2025-04-04 00:31:40.009722 | orchestrator | Friday 04 April 2025 00:31:40 +0000 (0:00:00.346) 0:00:13.181 ********** 2025-04-04 00:31:40.355412 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-04-04 00:31:40.359023 | orchestrator |
2025-04-04 00:31:40.359530 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] ***
2025-04-04 00:31:40.360270 | orchestrator | Friday 04 April 2025 00:31:40 +0000 (0:00:00.344) 0:00:13.525 **********
2025-04-04 00:31:41.857919 | orchestrator | ok: [testbed-manager]
2025-04-04 00:31:41.858530 | orchestrator | ok: [testbed-node-0]
2025-04-04 00:31:41.863038 | orchestrator | ok: [testbed-node-4]
2025-04-04 00:31:41.863200 | orchestrator | ok: [testbed-node-1]
2025-04-04 00:31:41.863221 | orchestrator | ok: [testbed-node-5]
2025-04-04 00:31:41.863234 | orchestrator | ok: [testbed-node-3]
2025-04-04 00:31:41.863251 | orchestrator | ok: [testbed-node-2]
2025-04-04 00:31:41.864375 | orchestrator |
2025-04-04 00:31:41.864454 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] *************
2025-04-04 00:31:41.867043 | orchestrator | Friday 04 April 2025 00:31:41 +0000 (0:00:01.502) 0:00:15.028 **********
2025-04-04 00:31:41.945706 | orchestrator | skipping: [testbed-manager]
2025-04-04 00:31:41.973385 | orchestrator | skipping: [testbed-node-3]
2025-04-04 00:31:41.999154 | orchestrator | skipping: [testbed-node-4]
2025-04-04 00:31:42.038320 | orchestrator | skipping: [testbed-node-5]
2025-04-04 00:31:42.119895 | orchestrator | skipping: [testbed-node-0]
2025-04-04 00:31:42.122528 | orchestrator | skipping: [testbed-node-1]
2025-04-04 00:31:42.123832 | orchestrator | skipping: [testbed-node-2]
2025-04-04 00:31:42.124043 | orchestrator |
2025-04-04 00:31:42.124287 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] *****
2025-04-04 00:31:42.124507 | orchestrator | Friday 04 April 2025 00:31:42 +0000 (0:00:00.264) 0:00:15.292 **********
2025-04-04 00:31:42.835348 | orchestrator | ok: [testbed-manager]
2025-04-04 00:31:42.837626 | orchestrator | ok: [testbed-node-3]
2025-04-04 00:31:42.837661 | orchestrator | ok: [testbed-node-4]
2025-04-04 00:31:42.838091 | orchestrator | ok: [testbed-node-5]
2025-04-04 00:31:42.838801 | orchestrator | ok: [testbed-node-0]
2025-04-04 00:31:42.839569 | orchestrator | ok: [testbed-node-1]
2025-04-04 00:31:42.840110 | orchestrator | ok: [testbed-node-2]
2025-04-04 00:31:42.841236 | orchestrator |
2025-04-04 00:31:42.841684 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] *******
2025-04-04 00:31:42.842976 | orchestrator | Friday 04 April 2025 00:31:42 +0000 (0:00:00.712) 0:00:16.005 **********
2025-04-04 00:31:42.939988 | orchestrator | skipping: [testbed-manager]
2025-04-04 00:31:42.972405 | orchestrator | skipping: [testbed-node-3]
2025-04-04 00:31:43.003719 | orchestrator | skipping: [testbed-node-4]
2025-04-04 00:31:43.036171 | orchestrator | skipping: [testbed-node-5]
2025-04-04 00:31:43.114692 | orchestrator | skipping: [testbed-node-0]
2025-04-04 00:31:43.115073 | orchestrator | skipping: [testbed-node-1]
2025-04-04 00:31:43.115104 | orchestrator | skipping: [testbed-node-2]
2025-04-04 00:31:43.115502 | orchestrator |
2025-04-04 00:31:43.115754 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] ***
2025-04-04 00:31:43.116196 | orchestrator | Friday 04 April 2025 00:31:43 +0000 (0:00:00.282) 0:00:16.288 **********
2025-04-04 00:31:43.817328 | orchestrator | ok: [testbed-manager]
2025-04-04 00:31:43.817516 | orchestrator | changed: [testbed-node-3]
2025-04-04 00:31:43.818309 | orchestrator | changed: [testbed-node-4]
2025-04-04 00:31:43.818465 | orchestrator | changed: [testbed-node-0]
2025-04-04 00:31:43.820298 | orchestrator | changed: [testbed-node-5]
2025-04-04 00:31:43.821428 | orchestrator | changed: [testbed-node-1]
2025-04-04 00:31:43.821473 | orchestrator | changed: [testbed-node-2]
2025-04-04 00:31:43.822934 | orchestrator |
2025-04-04 00:31:43.824265 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] *********************
2025-04-04 00:31:43.825183 | orchestrator | Friday 04 April 2025 00:31:43 +0000 (0:00:00.700) 0:00:16.988 **********
2025-04-04 00:31:45.012608 | orchestrator | ok: [testbed-manager]
2025-04-04 00:31:45.013033 | orchestrator | changed: [testbed-node-3]
2025-04-04 00:31:45.013068 | orchestrator | changed: [testbed-node-4]
2025-04-04 00:31:45.013090 | orchestrator | changed: [testbed-node-5]
2025-04-04 00:31:45.013255 | orchestrator | changed: [testbed-node-0]
2025-04-04 00:31:45.013941 | orchestrator | changed: [testbed-node-1]
2025-04-04 00:31:45.014653 | orchestrator | changed: [testbed-node-2]
2025-04-04 00:31:45.015194 | orchestrator |
2025-04-04 00:31:45.015679 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ********
2025-04-04 00:31:45.016327 | orchestrator | Friday 04 April 2025 00:31:44 +0000 (0:00:01.185) 0:00:18.174 **********
2025-04-04 00:31:46.225510 | orchestrator | ok: [testbed-node-2]
2025-04-04 00:31:46.225679 | orchestrator | ok: [testbed-node-0]
2025-04-04 00:31:46.226722 | orchestrator | ok: [testbed-node-4]
2025-04-04 00:31:46.227073 | orchestrator | ok: [testbed-node-5]
2025-04-04 00:31:46.229615 | orchestrator | ok: [testbed-node-3]
2025-04-04 00:31:46.231702 | orchestrator | ok: [testbed-node-1]
2025-04-04 00:31:46.232453 | orchestrator | ok: [testbed-manager]
2025-04-04 00:31:46.233030 | orchestrator |
2025-04-04 00:31:46.233897 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] ***
2025-04-04 00:31:46.234556 | orchestrator | Friday 04 April 2025 00:31:46 +0000 (0:00:01.220) 0:00:19.394 **********
2025-04-04 00:31:46.586516 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-04-04 00:31:46.587756 | orchestrator |
2025-04-04 00:31:46.589309 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] *************
2025-04-04 00:31:46.590299 | orchestrator | Friday 04 April 2025 00:31:46 +0000 (0:00:00.363) 0:00:19.758 **********
2025-04-04 00:31:46.677276 | orchestrator | skipping: [testbed-manager]
2025-04-04 00:31:48.232012 | orchestrator | changed: [testbed-node-5]
2025-04-04 00:31:48.232432 | orchestrator | changed: [testbed-node-3]
2025-04-04 00:31:48.233058 | orchestrator | changed: [testbed-node-1]
2025-04-04 00:31:48.233658 | orchestrator | changed: [testbed-node-0]
2025-04-04 00:31:48.234520 | orchestrator | changed: [testbed-node-2]
2025-04-04 00:31:48.239406 | orchestrator | changed: [testbed-node-4]
2025-04-04 00:31:48.239787 | orchestrator |
2025-04-04 00:31:48.240277 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2025-04-04 00:31:48.240823 | orchestrator | Friday 04 April 2025 00:31:48 +0000 (0:00:01.645) 0:00:21.403 **********
2025-04-04 00:31:48.314535 | orchestrator | ok: [testbed-manager]
2025-04-04 00:31:48.346510 | orchestrator | ok: [testbed-node-3]
2025-04-04 00:31:48.375145 | orchestrator | ok: [testbed-node-4]
2025-04-04 00:31:48.400577 | orchestrator | ok: [testbed-node-5]
2025-04-04 00:31:48.474744 | orchestrator | ok: [testbed-node-0]
2025-04-04 00:31:48.474939 | orchestrator | ok: [testbed-node-1]
2025-04-04 00:31:48.475670 | orchestrator | ok: [testbed-node-2]
2025-04-04 00:31:48.476245 | orchestrator |
2025-04-04 00:31:48.476915 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2025-04-04 00:31:48.477273 | orchestrator | Friday 04 April 2025 00:31:48 +0000 (0:00:00.243) 0:00:21.647 **********
2025-04-04 00:31:48.575568 | orchestrator | ok: [testbed-manager]
2025-04-04 00:31:48.610110 | orchestrator | ok: [testbed-node-3]
2025-04-04 00:31:48.637145 | orchestrator | ok: [testbed-node-4]
2025-04-04 00:31:48.667540 | orchestrator | ok: [testbed-node-5]
2025-04-04 00:31:48.752389 | orchestrator | ok: [testbed-node-0]
2025-04-04 00:31:48.753189 | orchestrator | ok: [testbed-node-1]
2025-04-04 00:31:48.753224 | orchestrator | ok: [testbed-node-2]
2025-04-04 00:31:48.754361 | orchestrator |
2025-04-04 00:31:48.755518 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2025-04-04 00:31:48.757353 | orchestrator | Friday 04 April 2025 00:31:48 +0000 (0:00:00.274) 0:00:21.922 **********
2025-04-04 00:31:48.835345 | orchestrator | ok: [testbed-manager]
2025-04-04 00:31:48.883119 | orchestrator | ok: [testbed-node-3]
2025-04-04 00:31:48.916621 | orchestrator | ok: [testbed-node-4]
2025-04-04 00:31:48.951264 | orchestrator | ok: [testbed-node-5]
2025-04-04 00:31:49.021055 | orchestrator | ok: [testbed-node-0]
2025-04-04 00:31:49.021998 | orchestrator | ok: [testbed-node-1]
2025-04-04 00:31:49.023219 | orchestrator | ok: [testbed-node-2]
2025-04-04 00:31:49.026401 | orchestrator |
2025-04-04 00:31:49.322063 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2025-04-04 00:31:49.322124 | orchestrator | Friday 04 April 2025 00:31:49 +0000 (0:00:00.271) 0:00:22.193 **********
2025-04-04 00:31:49.322152 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-04-04 00:31:49.322506 | orchestrator |
2025-04-04 00:31:49.324054 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2025-04-04 00:31:49.324254 | orchestrator | Friday 04 April 2025 00:31:49 +0000 (0:00:00.300) 0:00:22.494 **********
2025-04-04 00:31:49.919749 | orchestrator | ok: [testbed-node-3]
2025-04-04 00:31:49.920205 | orchestrator | ok: [testbed-node-4]
2025-04-04 00:31:49.920598 | orchestrator | ok: [testbed-manager]
2025-04-04 00:31:49.921658 | orchestrator | ok: [testbed-node-5]
2025-04-04 00:31:49.922251 | orchestrator | ok: [testbed-node-0]
2025-04-04 00:31:49.923139 | orchestrator | ok: [testbed-node-1]
2025-04-04 00:31:49.923486 | orchestrator | ok: [testbed-node-2]
2025-04-04 00:31:49.924301 | orchestrator |
2025-04-04 00:31:49.925366 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2025-04-04 00:31:49.925927 | orchestrator | Friday 04 April 2025 00:31:49 +0000 (0:00:00.595) 0:00:23.089 **********
2025-04-04 00:31:50.018961 | orchestrator | skipping: [testbed-manager]
2025-04-04 00:31:50.048769 | orchestrator | skipping: [testbed-node-3]
2025-04-04 00:31:50.083831 | orchestrator | skipping: [testbed-node-4]
2025-04-04 00:31:50.109445 | orchestrator | skipping: [testbed-node-5]
2025-04-04 00:31:50.185549 | orchestrator | skipping: [testbed-node-0]
2025-04-04 00:31:50.185836 | orchestrator | skipping: [testbed-node-1]
2025-04-04 00:31:50.187474 | orchestrator | skipping: [testbed-node-2]
2025-04-04 00:31:50.190157 | orchestrator |
2025-04-04 00:31:50.190434 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2025-04-04 00:31:50.191089 | orchestrator | Friday 04 April 2025 00:31:50 +0000 (0:00:00.266) 0:00:23.356 **********
2025-04-04 00:31:51.394325 | orchestrator | ok: [testbed-node-3]
2025-04-04 00:31:51.396462 | orchestrator | ok: [testbed-node-4]
2025-04-04 00:31:51.401062 | orchestrator | ok: [testbed-node-5]
2025-04-04 00:31:51.401133 | orchestrator | changed: [testbed-manager]
2025-04-04 00:31:51.401571 | orchestrator | changed: [testbed-node-1]
2025-04-04 00:31:51.402122 | orchestrator | changed: [testbed-node-0]
2025-04-04 00:31:51.403020 | orchestrator | changed: [testbed-node-2]
2025-04-04 00:31:51.404032 | orchestrator |
2025-04-04 00:31:51.404810 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2025-04-04 00:31:51.405761 | orchestrator | Friday 04 April 2025 00:31:51 +0000 (0:00:01.207) 0:00:24.563 **********
2025-04-04 00:31:51.977951 | orchestrator | ok: [testbed-node-3]
2025-04-04 00:31:51.979132 | orchestrator | ok: [testbed-node-4]
2025-04-04 00:31:51.983792 | orchestrator | ok: [testbed-manager]
2025-04-04 00:31:51.984094 | orchestrator | ok: [testbed-node-5]
2025-04-04 00:31:51.984127 | orchestrator | ok: [testbed-node-0]
2025-04-04 00:31:51.985987 | orchestrator | ok: [testbed-node-2]
2025-04-04 00:31:51.986281 | orchestrator | ok: [testbed-node-1]
2025-04-04 00:31:51.987586 | orchestrator |
2025-04-04 00:31:51.989097 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2025-04-04 00:31:51.989639 | orchestrator | Friday 04 April 2025 00:31:51 +0000 (0:00:00.585) 0:00:25.149 **********
2025-04-04 00:31:53.192105 | orchestrator | ok: [testbed-manager]
2025-04-04 00:31:53.195038 | orchestrator | ok: [testbed-node-3]
2025-04-04 00:31:53.195287 | orchestrator | ok: [testbed-node-4]
2025-04-04 00:31:53.196368 | orchestrator | ok: [testbed-node-5]
2025-04-04 00:31:53.196419 | orchestrator | changed: [testbed-node-0]
2025-04-04 00:31:53.198217 | orchestrator | changed: [testbed-node-1]
2025-04-04 00:31:53.199097 | orchestrator | changed: [testbed-node-2]
2025-04-04 00:31:53.199724 | orchestrator |
2025-04-04 00:31:53.200829 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2025-04-04 00:31:53.202199 | orchestrator | Friday 04 April 2025 00:31:53 +0000 (0:00:01.212) 0:00:26.362 **********
2025-04-04 00:32:07.108579 | orchestrator | ok: [testbed-node-4]
2025-04-04 00:32:07.109604 | orchestrator | ok: [testbed-node-5]
2025-04-04 00:32:07.109638 | orchestrator | ok: [testbed-node-3]
2025-04-04 00:32:07.109661 | orchestrator | changed: [testbed-manager]
2025-04-04 00:32:07.109976 | orchestrator | changed: [testbed-node-1]
2025-04-04 00:32:07.110510 | orchestrator | changed: [testbed-node-0]
2025-04-04 00:32:07.111105 | orchestrator | changed: [testbed-node-2]
2025-04-04 00:32:07.112101 | orchestrator |
2025-04-04 00:32:07.112364 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] *****
2025-04-04 00:32:07.113154 | orchestrator | Friday 04 April 2025 00:32:07 +0000 (0:00:13.911) 0:00:40.273 **********
2025-04-04 00:32:07.177971 | orchestrator | ok: [testbed-manager]
2025-04-04 00:32:07.210281 | orchestrator | ok: [testbed-node-3]
2025-04-04 00:32:07.247648 | orchestrator | ok: [testbed-node-4]
2025-04-04 00:32:07.274117 | orchestrator | ok: [testbed-node-5]
2025-04-04 00:32:07.356666 | orchestrator | ok: [testbed-node-0]
2025-04-04 00:32:07.357439 | orchestrator | ok: [testbed-node-1]
2025-04-04 00:32:07.358960 | orchestrator | ok: [testbed-node-2]
2025-04-04 00:32:07.359394 | orchestrator |
2025-04-04 00:32:07.360201 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] *****
2025-04-04 00:32:07.361408 | orchestrator | Friday 04 April 2025 00:32:07 +0000 (0:00:00.255) 0:00:40.528 **********
2025-04-04 00:32:07.444662 | orchestrator | ok: [testbed-manager]
2025-04-04 00:32:07.477879 | orchestrator | ok: [testbed-node-3]
2025-04-04 00:32:07.503725 | orchestrator | ok: [testbed-node-4]
2025-04-04 00:32:07.537444 | orchestrator | ok: [testbed-node-5]
2025-04-04 00:32:07.618207 | orchestrator | ok: [testbed-node-0]
2025-04-04 00:32:07.619454 | orchestrator | ok: [testbed-node-1]
2025-04-04 00:32:07.619886 | orchestrator | ok: [testbed-node-2]
2025-04-04 00:32:07.621211 | orchestrator |
2025-04-04 00:32:07.622133 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] ***
2025-04-04 00:32:07.623040 | orchestrator | Friday 04 April 2025 00:32:07 +0000 (0:00:00.262) 0:00:40.790 **********
2025-04-04 00:32:07.731802 | orchestrator | ok: [testbed-manager]
2025-04-04 00:32:07.769144 | orchestrator | ok: [testbed-node-3]
2025-04-04 00:32:07.797721 | orchestrator | ok: [testbed-node-4]
2025-04-04 00:32:07.830256 | orchestrator | ok: [testbed-node-5]
2025-04-04 00:32:07.919307 | orchestrator | ok: [testbed-node-0]
2025-04-04 00:32:07.920217 | orchestrator | ok: [testbed-node-1]
2025-04-04 00:32:07.921168 | orchestrator | ok: [testbed-node-2]
2025-04-04 00:32:07.921982 | orchestrator |
2025-04-04 00:32:07.922821 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] ****
2025-04-04 00:32:07.923545 | orchestrator | Friday 04 April 2025 00:32:07 +0000 (0:00:00.300) 0:00:41.091 **********
2025-04-04 00:32:08.272452 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-04-04 00:32:08.273264 | orchestrator |
2025-04-04 00:32:08.273434 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************
2025-04-04 00:32:08.273963 | orchestrator | Friday 04 April 2025 00:32:08 +0000 (0:00:00.352) 0:00:41.443 **********
2025-04-04 00:32:09.884897 | orchestrator | ok: [testbed-node-4]
2025-04-04 00:32:09.885868 | orchestrator | ok: [testbed-node-5]
2025-04-04 00:32:09.885905 | orchestrator | ok: [testbed-node-3]
2025-04-04 00:32:09.885921 | orchestrator | ok: [testbed-node-1]
2025-04-04 00:32:09.885943 | orchestrator | ok: [testbed-manager]
2025-04-04 00:32:09.886483 | orchestrator | ok: [testbed-node-0]
2025-04-04 00:32:09.887538 | orchestrator | ok: [testbed-node-2]
2025-04-04 00:32:09.888180 | orchestrator |
2025-04-04 00:32:09.888560 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] ***********
2025-04-04 00:32:09.888968 | orchestrator | Friday 04 April 2025 00:32:09 +0000 (0:00:01.611) 0:00:43.054 **********
2025-04-04 00:32:10.925768 | orchestrator | changed: [testbed-node-4]
2025-04-04 00:32:10.926009 | orchestrator | changed: [testbed-node-5]
2025-04-04 00:32:10.927088 | orchestrator | changed: [testbed-manager]
2025-04-04 00:32:10.927120 | orchestrator | changed: [testbed-node-3]
2025-04-04 00:32:10.928227 | orchestrator | changed: [testbed-node-1]
2025-04-04 00:32:10.928605 | orchestrator | changed: [testbed-node-0]
2025-04-04 00:32:10.929303 | orchestrator | changed: [testbed-node-2]
2025-04-04 00:32:10.929771 | orchestrator |
2025-04-04 00:32:10.930090 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] *************************
2025-04-04 00:32:10.930521 | orchestrator | Friday 04 April 2025 00:32:10 +0000 (0:00:01.041) 0:00:44.096 **********
2025-04-04 00:32:11.746091 | orchestrator | ok: [testbed-node-0]
2025-04-04 00:32:11.746320 | orchestrator | ok: [testbed-node-4]
2025-04-04 00:32:11.746437 | orchestrator | ok: [testbed-node-3]
2025-04-04 00:32:11.746734 | orchestrator | ok: [testbed-node-5]
2025-04-04 00:32:11.747449 | orchestrator | ok: [testbed-manager]
2025-04-04 00:32:11.747765 | orchestrator | ok: [testbed-node-1]
2025-04-04 00:32:11.748196 | orchestrator | ok: [testbed-node-2]
2025-04-04 00:32:11.748551 | orchestrator |
2025-04-04 00:32:11.748998 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] **************************
2025-04-04 00:32:11.749314 | orchestrator | Friday 04 April 2025 00:32:11 +0000 (0:00:00.822) 0:00:44.919 **********
2025-04-04 00:32:12.100193 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-04-04 00:32:12.101156 | orchestrator |
2025-04-04 00:32:12.101201 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] ***
2025-04-04 00:32:12.102013 | orchestrator | Friday 04 April 2025 00:32:12 +0000 (0:00:00.352) 0:00:45.271 **********
2025-04-04 00:32:13.166540 | orchestrator | changed: [testbed-node-3]
2025-04-04 00:32:13.166747 | orchestrator | changed: [testbed-node-4]
2025-04-04 00:32:13.166777 | orchestrator | changed: [testbed-manager]
2025-04-04 00:32:13.167260 | orchestrator | changed: [testbed-node-5]
2025-04-04 00:32:13.169280 | orchestrator | changed: [testbed-node-0]
2025-04-04 00:32:13.169726 | orchestrator | changed: [testbed-node-1]
2025-04-04 00:32:13.170407 | orchestrator | changed: [testbed-node-2]
2025-04-04 00:32:13.170899 | orchestrator |
2025-04-04 00:32:13.171434 | orchestrator | TASK [osism.services.rsyslog : Include additional log server tasks] ************
2025-04-04 00:32:13.172187 | orchestrator | Friday 04 April 2025 00:32:13 +0000 (0:00:01.066) 0:00:46.338 **********
2025-04-04 00:32:13.250171 | orchestrator | skipping: [testbed-manager]
2025-04-04 00:32:13.283422 | orchestrator | skipping: [testbed-node-3]
2025-04-04 00:32:13.315540 | orchestrator | skipping: [testbed-node-4]
2025-04-04 00:32:13.349082 | orchestrator | skipping: [testbed-node-5]
2025-04-04 00:32:13.542374 | orchestrator | skipping: [testbed-node-0]
2025-04-04 00:32:13.542484 | orchestrator | skipping: [testbed-node-1]
2025-04-04 00:32:13.543082 | orchestrator | skipping: [testbed-node-2]
2025-04-04 00:32:13.543468 | orchestrator |
2025-04-04 00:32:13.543985 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] ****************
2025-04-04 00:32:13.547384 | orchestrator | Friday 04 April 2025 00:32:13 +0000 (0:00:00.377) 0:00:46.715 **********
2025-04-04 00:32:27.628139 | orchestrator | changed: [testbed-node-4]
2025-04-04 00:32:27.628265 | orchestrator | changed: [testbed-node-5]
2025-04-04 00:32:27.628277 | orchestrator | changed: [testbed-node-0]
2025-04-04 00:32:27.628285 | orchestrator | changed: [testbed-node-2]
2025-04-04 00:32:27.628292 | orchestrator | changed: [testbed-node-1]
2025-04-04 00:32:27.628302 | orchestrator | changed: [testbed-node-3]
2025-04-04 00:32:27.628947 | orchestrator | changed: [testbed-manager]
2025-04-04 00:32:27.629437 | orchestrator |
2025-04-04 00:32:27.630185 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] *****************************
2025-04-04 00:32:27.631072 | orchestrator | Friday 04 April 2025 00:32:27 +0000 (0:00:14.077) 0:01:00.792 **********
2025-04-04 00:32:28.685155 | orchestrator | ok: [testbed-node-4]
2025-04-04 00:32:28.688585 | orchestrator | ok: [testbed-node-1]
2025-04-04 00:32:28.689154 | orchestrator | ok: [testbed-node-5]
2025-04-04 00:32:28.689196 | orchestrator | ok: [testbed-node-3]
2025-04-04 00:32:28.689211 | orchestrator | ok: [testbed-node-2]
2025-04-04 00:32:28.689226 | orchestrator | ok: [testbed-node-0]
2025-04-04 00:32:28.689240 | orchestrator | ok: [testbed-manager]
2025-04-04 00:32:28.689255 | orchestrator |
2025-04-04 00:32:28.689278 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ******************
2025-04-04 00:32:28.689574 | orchestrator | Friday 04 April 2025 00:32:28 +0000 (0:00:01.062) 0:01:01.855 **********
2025-04-04 00:32:29.628641 | orchestrator | ok: [testbed-manager]
2025-04-04 00:32:29.629022 | orchestrator | ok: [testbed-node-4]
2025-04-04 00:32:29.629106 | orchestrator | ok: [testbed-node-1]
2025-04-04 00:32:29.630210 | orchestrator | ok: [testbed-node-3]
2025-04-04 00:32:29.630917 | orchestrator | ok: [testbed-node-5]
2025-04-04 00:32:29.631399 | orchestrator | ok: [testbed-node-0]
2025-04-04 00:32:29.632421 | orchestrator | ok: [testbed-node-2]
2025-04-04 00:32:29.632754 | orchestrator |
2025-04-04 00:32:29.633346 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] *****
2025-04-04 00:32:29.633769 | orchestrator | Friday 04 April 2025 00:32:29 +0000 (0:00:00.945) 0:01:02.800 **********
2025-04-04 00:32:29.729594 | orchestrator | ok: [testbed-manager]
2025-04-04 00:32:29.771971 | orchestrator | ok: [testbed-node-3]
2025-04-04 00:32:29.801254 | orchestrator | ok: [testbed-node-4]
2025-04-04 00:32:29.843314 | orchestrator | ok: [testbed-node-5]
2025-04-04 00:32:29.912727 | orchestrator | ok: [testbed-node-0]
2025-04-04 00:32:29.913041 | orchestrator | ok: [testbed-node-1]
2025-04-04 00:32:29.914124 | orchestrator | ok: [testbed-node-2]
2025-04-04 00:32:29.915070 | orchestrator |
2025-04-04 00:32:29.916018 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] ***
2025-04-04 00:32:29.916413 | orchestrator | Friday 04 April 2025 00:32:29 +0000 (0:00:00.283) 0:01:03.084 **********
2025-04-04 00:32:30.002486 | orchestrator | ok: [testbed-manager]
2025-04-04 00:32:30.037275 | orchestrator | ok: [testbed-node-3]
2025-04-04 00:32:30.062076 | orchestrator | ok: [testbed-node-4]
2025-04-04 00:32:30.093188 | orchestrator | ok: [testbed-node-5]
2025-04-04 00:32:30.167790 | orchestrator | ok: [testbed-node-0]
2025-04-04 00:32:30.169287 | orchestrator | ok: [testbed-node-1]
2025-04-04 00:32:30.170483 | orchestrator | ok: [testbed-node-2]
2025-04-04 00:32:30.171671 | orchestrator |
2025-04-04 00:32:30.173235 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] ****
2025-04-04 00:32:30.174155 | orchestrator | Friday 04 April 2025 00:32:30 +0000 (0:00:00.255) 0:01:03.340 **********
2025-04-04 00:32:30.508247 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-04-04 00:32:30.508791 | orchestrator |
2025-04-04 00:32:30.510864 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ********************
2025-04-04 00:32:30.511435 | orchestrator | Friday 04 April 2025 00:32:30 +0000 (0:00:00.340) 0:01:03.680 **********
2025-04-04 00:32:32.158457 | orchestrator | ok: [testbed-node-0]
2025-04-04 00:32:32.159562 | orchestrator | ok: [testbed-node-4]
2025-04-04 00:32:32.160146 | orchestrator | ok: [testbed-node-5]
2025-04-04 00:32:32.162245 | orchestrator | ok: [testbed-node-3]
2025-04-04 00:32:32.163939 | orchestrator | ok: [testbed-node-2]
2025-04-04 00:32:32.165002 | orchestrator | ok: [testbed-node-1]
2025-04-04 00:32:32.166618 | orchestrator | ok: [testbed-manager]
2025-04-04 00:32:32.167172 | orchestrator |
2025-04-04 00:32:32.171138 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] ***************************
2025-04-04 00:32:32.172667 | orchestrator | Friday 04 April 2025 00:32:32 +0000 (0:00:01.646) 0:01:05.326 **********
2025-04-04 00:32:32.806673 | orchestrator | changed: [testbed-manager]
2025-04-04 00:32:32.808352 | orchestrator | changed: [testbed-node-4]
2025-04-04 00:32:32.808793 | orchestrator | changed: [testbed-node-0]
2025-04-04 00:32:32.809418 | orchestrator | changed: [testbed-node-1]
2025-04-04 00:32:32.810882 | orchestrator | changed: [testbed-node-3]
2025-04-04 00:32:32.811360 | orchestrator | changed: [testbed-node-5]
2025-04-04 00:32:32.811634 | orchestrator | changed: [testbed-node-2]
2025-04-04 00:32:32.812523 | orchestrator |
2025-04-04 00:32:32.812975 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] ***
2025-04-04 00:32:32.813506 | orchestrator | Friday 04 April 2025 00:32:32 +0000 (0:00:00.650) 0:01:05.977 **********
2025-04-04 00:32:32.888934 | orchestrator | ok: [testbed-manager]
2025-04-04 00:32:32.923304 | orchestrator | ok: [testbed-node-3]
2025-04-04 00:32:32.951023 | orchestrator | ok: [testbed-node-4]
2025-04-04 00:32:32.983217 | orchestrator | ok: [testbed-node-5]
2025-04-04 00:32:33.071180 | orchestrator | ok: [testbed-node-0]
2025-04-04 00:32:33.072016 | orchestrator | ok: [testbed-node-1]
2025-04-04 00:32:33.073459 | orchestrator | ok: [testbed-node-2]
2025-04-04 00:32:33.074754 | orchestrator |
2025-04-04 00:32:33.075576 | orchestrator | TASK [osism.commons.packages : Update package cache] ***************************
2025-04-04 00:32:33.076108 | orchestrator | Friday 04 April 2025 00:32:33 +0000 (0:00:00.266) 0:01:06.243 **********
2025-04-04 00:32:34.204777 | orchestrator | ok: [testbed-manager]
2025-04-04 00:32:34.207814 | orchestrator | ok: [testbed-node-1]
2025-04-04 00:32:34.210508 | orchestrator | ok: [testbed-node-0]
2025-04-04 00:32:34.210546 | orchestrator | ok: [testbed-node-3]
2025-04-04 00:32:34.210569 | orchestrator | ok: [testbed-node-4]
2025-04-04 00:32:34.211670 | orchestrator | ok: [testbed-node-2]
2025-04-04 00:32:34.213523 | orchestrator | ok: [testbed-node-5]
2025-04-04 00:32:34.214475 | orchestrator |
2025-04-04 00:32:34.214950 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] **********************
2025-04-04 00:32:34.215879 | orchestrator | Friday 04 April 2025 00:32:34 +0000 (0:00:01.131) 0:01:07.375 **********
2025-04-04 00:32:35.820360 | orchestrator | changed: [testbed-node-0]
2025-04-04 00:32:35.820603 | orchestrator | changed: [testbed-node-5]
2025-04-04 00:32:35.820638 | orchestrator | changed: [testbed-node-4]
2025-04-04 00:32:35.821430 | orchestrator | changed: [testbed-node-1]
2025-04-04 00:32:35.823096 | orchestrator | changed: [testbed-node-3]
2025-04-04 00:32:35.824100 | orchestrator | changed: [testbed-manager]
2025-04-04 00:32:35.824594 | orchestrator | changed: [testbed-node-2]
2025-04-04 00:32:35.825150 | orchestrator |
2025-04-04 00:32:35.825772 | orchestrator | TASK [osism.commons.packages : Upgrade packages] *******************************
2025-04-04 00:32:35.826112 | orchestrator | Friday 04 April 2025 00:32:35 +0000 (0:00:01.614) 0:01:08.989 **********
2025-04-04 00:32:38.060805 | orchestrator | ok: [testbed-node-1]
2025-04-04 00:32:38.061595 | orchestrator | ok: [testbed-node-0]
2025-04-04 00:32:38.061630 | orchestrator | ok: [testbed-node-4]
2025-04-04 00:32:38.061652 | orchestrator | ok: [testbed-node-3]
2025-04-04 00:32:38.062911 | orchestrator | ok: [testbed-manager]
2025-04-04 00:32:38.063179 | orchestrator | ok: [testbed-node-2]
2025-04-04 00:32:38.063648 | orchestrator | ok: [testbed-node-5]
2025-04-04 00:32:38.064081 | orchestrator |
2025-04-04 00:32:38.064812 | orchestrator | TASK [osism.commons.packages : Download required packages] *********************
2025-04-04 00:32:38.065358 | orchestrator | Friday 04 April 2025 00:32:38 +0000 (0:00:02.241) 0:01:11.230 **********
2025-04-04 00:33:15.208732 | orchestrator | ok: [testbed-manager]
2025-04-04 00:33:15.210461 | orchestrator | ok: [testbed-node-0]
2025-04-04 00:33:15.210498 | orchestrator | ok: [testbed-node-4]
2025-04-04 00:33:15.210518 | orchestrator | ok: [testbed-node-1]
2025-04-04 00:33:15.210779 | orchestrator | ok: [testbed-node-5]
2025-04-04 00:33:15.210804 | orchestrator | ok: [testbed-node-3]
2025-04-04 00:33:15.211417 | orchestrator | ok: [testbed-node-2]
2025-04-04 00:33:15.212208 | orchestrator |
2025-04-04 00:33:15.212873 | orchestrator | TASK [osism.commons.packages : Install required packages] **********************
2025-04-04 00:33:15.213819 | orchestrator | Friday 04 April 2025 00:33:15 +0000 (0:00:37.146) 0:01:48.377 **********
2025-04-04 00:34:16.317496 | orchestrator | changed: [testbed-manager]
2025-04-04 00:34:16.317743 | orchestrator | changed: [testbed-node-0]
2025-04-04 00:34:16.317770 | orchestrator | changed: [testbed-node-4]
2025-04-04 00:34:16.317804 | orchestrator | changed: [testbed-node-1]
2025-04-04 00:34:16.317819 | orchestrator | changed: [testbed-node-5]
2025-04-04 00:34:16.317840 | orchestrator | changed: [testbed-node-3]
2025-04-04 00:34:16.318841 | orchestrator | changed: [testbed-node-2]
2025-04-04 00:34:16.319416 | orchestrator |
2025-04-04 00:34:16.320349 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] *********
2025-04-04 00:34:16.320650 | orchestrator | Friday 04 April 2025 00:34:16 +0000 (0:01:01.103) 0:02:49.480 **********
2025-04-04 00:34:17.970467 | orchestrator | ok: [testbed-node-3]
2025-04-04 00:34:17.970575 | orchestrator | ok: [testbed-node-4]
2025-04-04 00:34:17.971629 | orchestrator | ok: [testbed-node-0]
2025-04-04 00:34:17.972456 | orchestrator | ok: [testbed-node-5]
2025-04-04 00:34:17.975226 | orchestrator | ok: [testbed-manager]
2025-04-04 00:34:17.975402 | orchestrator | ok: [testbed-node-1]
2025-04-04 00:34:17.980100 | orchestrator | ok: [testbed-node-2]
2025-04-04 00:34:17.981430 | orchestrator |
2025-04-04 00:34:17.981765 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] ***
2025-04-04 00:34:17.982340 | orchestrator | Friday 04 April 2025 00:34:17 +0000 (0:00:01.658) 0:02:51.139 **********
2025-04-04 00:34:32.763601 | orchestrator | ok: [testbed-node-4]
2025-04-04 00:34:32.764179 | orchestrator | ok: [testbed-node-0]
2025-04-04 00:34:32.764216 | orchestrator | ok: [testbed-node-5]
2025-04-04 00:34:32.764241 | orchestrator | ok: [testbed-node-1]
2025-04-04 00:34:32.765163 | orchestrator | ok: [testbed-node-3]
2025-04-04 00:34:32.765646 | orchestrator | ok: [testbed-node-2]
2025-04-04 00:34:32.766479 | orchestrator | changed: [testbed-manager]
2025-04-04 00:34:32.766676 | orchestrator |
2025-04-04 00:34:32.767448 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] *****************************
2025-04-04 00:34:32.767910 | orchestrator | Friday 04 April 2025 00:34:32 +0000 (0:00:14.787) 0:03:05.926 **********
2025-04-04 00:34:33.164833 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]})
2025-04-04 00:34:33.165406 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]})
2025-04-04 00:34:33.165487 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]})
2025-04-04 00:34:33.165939 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]})
2025-04-04 00:34:33.166352 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]})
2025-04-04 00:34:33.167248 | orchestrator |
2025-04-04 00:34:33.167321 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] ***********
2025-04-04 00:34:33.167345 | orchestrator | Friday 04 April 2025 00:34:33 +0000 (0:00:00.410) 0:03:06.337 **********
2025-04-04 00:34:33.234728 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-04-04 00:34:33.266127 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-04-04 00:34:33.266170 | orchestrator | skipping: [testbed-manager]
2025-04-04 00:34:33.266363 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-04-04 00:34:33.297284 | orchestrator | skipping: [testbed-node-3]
2025-04-04 00:34:33.335243 | orchestrator | skipping: [testbed-node-4]
2025-04-04 00:34:33.359666 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-04-04 00:34:33.359693 | orchestrator | skipping: [testbed-node-5]
2025-04-04 00:34:33.981064 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-04-04 00:34:33.981418 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-04-04 00:34:33.982418 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-04-04 00:34:33.982790 | orchestrator |
2025-04-04 00:34:33.983476 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] ****************
2025-04-04 00:34:33.983598 | orchestrator | Friday 04 April 2025 00:34:33 +0000 (0:00:00.811) 0:03:07.148 **********
2025-04-04 00:34:34.063862 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-04-04 00:34:34.066500 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-04-04 00:34:34.067441 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-04-04 00:34:34.067467 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-04-04 00:34:34.067487 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-04-04 00:34:34.067503 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-04-04 00:34:34.067545 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-04-04 00:34:34.067560 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-04-04 00:34:34.067579 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-04-04 00:34:34.067838 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-04-04 00:34:34.068154 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-04-04 00:34:34.123150 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-04-04 00:34:34.123292 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-04-04 00:34:34.123431 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-04-04 00:34:34.123838 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-04-04 00:34:34.124149 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-04-04 00:34:34.124445 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-04-04 00:34:34.124793 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-04-04 00:34:34.125305 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-04-04 00:34:34.125508 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-04-04 00:34:34.126430 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-04-04 00:34:34.126584 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-04-04 00:34:34.126882 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-04-04 00:34:34.127161 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-04-04 00:34:34.127360 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-04-04 00:34:34.178432 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-04-04 00:34:34.179835 | orchestrator | skipping: [testbed-manager]
2025-04-04 00:34:34.180583 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-04-04 00:34:34.181077 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-04-04 00:34:34.181735 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-04-04 00:34:34.182349 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-04-04 00:34:34.182572 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-04-04 00:34:34.183495 |
orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-04-04 00:34:34.228508 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-04-04 00:34:34.229823 | orchestrator | skipping: [testbed-node-3] 2025-04-04 00:34:34.231409 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-04-04 00:34:34.232179 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-04-04 00:34:34.232208 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-04-04 00:34:34.232669 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-04-04 00:34:34.233184 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-04-04 00:34:34.237189 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-04-04 00:34:34.238392 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-04-04 00:34:34.272516 | orchestrator | skipping: [testbed-node-4] 2025-04-04 00:34:41.676150 | orchestrator | skipping: [testbed-node-5] 2025-04-04 00:34:41.679044 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-04-04 00:34:41.679118 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-04-04 00:34:41.681727 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-04-04 00:34:41.683188 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-04-04 00:34:41.683889 | orchestrator | changed: [testbed-node-1] => (item={'name': 
'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-04-04 00:34:41.685306 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-04-04 00:34:41.688219 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-04-04 00:34:41.688398 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-04-04 00:34:41.688419 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-04-04 00:34:41.688436 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-04-04 00:34:41.689643 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-04-04 00:34:41.690795 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-04-04 00:34:41.691886 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-04-04 00:34:41.692666 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-04-04 00:34:41.693622 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-04-04 00:34:41.694410 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-04-04 00:34:41.695475 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-04-04 00:34:41.696499 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-04-04 00:34:41.697302 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-04-04 00:34:41.697905 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 
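The rabbitmq sysctl values applied in the task above can be reproduced outside Ansible; as a minimal sketch (a hypothetical helper, not part of the osism.commons collection), the same name/value pairs rendered into `/etc/sysctl.d` drop-in format look like this:

```python
# Sketch: render the rabbitmq sysctl parameters seen in the log above into
# sysctl.d conf-file format. Hypothetical helper, not the role's implementation.
RABBITMQ_SYSCTL = [
    ("net.ipv4.tcp_keepalive_time", 6),
    ("net.ipv4.tcp_keepalive_intvl", 3),
    ("net.ipv4.tcp_keepalive_probes", 3),
    ("net.core.wmem_max", 16777216),
    ("net.core.rmem_max", 16777216),
    ("net.ipv4.tcp_fin_timeout", 20),
    ("net.ipv4.tcp_tw_reuse", 1),
    ("net.core.somaxconn", 4096),
    ("net.ipv4.tcp_syncookies", 0),
    ("net.ipv4.tcp_max_syn_backlog", 8192),
]

def render_sysctl_conf(params):
    """Return the contents of a sysctl.d drop-in for the given (name, value) pairs."""
    return "".join(f"{name} = {value}\n" for name, value in params)

if __name__ == "__main__":
    # Would be written to e.g. /etc/sysctl.d/99-rabbitmq.conf and loaded
    # with `sysctl --system`; here we just print the rendered file.
    print(render_sysctl_conf(RABBITMQ_SYSCTL), end="")
```

Note how the task is skipped on testbed-manager and testbed-node-3..5 (not in the rabbitmq group) and applied only on testbed-node-0..2, which is why only those hosts report `changed`.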
2025-04-04 00:34:41.698710 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-04-04 00:34:41.699817 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-04-04 00:34:41.700580 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-04-04 00:34:41.701950 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-04-04 00:34:41.702756 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-04-04 00:34:41.702786 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-04-04 00:34:41.703495 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-04-04 00:34:41.704181 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-04-04 00:34:41.704828 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-04-04 00:34:41.705469 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-04-04 00:34:41.706161 | orchestrator | 2025-04-04 00:34:41.706666 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] ***************** 2025-04-04 00:34:41.707473 | orchestrator | Friday 04 April 2025 00:34:41 +0000 (0:00:07.697) 0:03:14.846 ********** 2025-04-04 00:34:44.220949 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-04-04 00:34:44.221566 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-04-04 00:34:44.222370 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-04-04 00:34:44.223176 | orchestrator | changed: 
[testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-04-04 00:34:44.223546 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-04-04 00:34:44.224113 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-04-04 00:34:44.224932 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-04-04 00:34:44.225237 | orchestrator | 2025-04-04 00:34:44.225711 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] ***************** 2025-04-04 00:34:44.226095 | orchestrator | Friday 04 April 2025 00:34:44 +0000 (0:00:02.543) 0:03:17.390 ********** 2025-04-04 00:34:44.288652 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-04-04 00:34:44.319348 | orchestrator | skipping: [testbed-manager] 2025-04-04 00:34:44.402438 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-04-04 00:34:45.677859 | orchestrator | skipping: [testbed-node-0] 2025-04-04 00:34:45.678543 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-04-04 00:34:45.679525 | orchestrator | skipping: [testbed-node-1] 2025-04-04 00:34:45.681167 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-04-04 00:34:45.683542 | orchestrator | skipping: [testbed-node-2] 2025-04-04 00:34:45.683583 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-04-04 00:34:45.683599 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-04-04 00:34:45.683621 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-04-04 
00:34:45.684703 | orchestrator | 2025-04-04 00:34:45.685958 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] **************** 2025-04-04 00:34:45.687265 | orchestrator | Friday 04 April 2025 00:34:45 +0000 (0:00:01.457) 0:03:18.847 ********** 2025-04-04 00:34:45.732131 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-04-04 00:34:45.769550 | orchestrator | skipping: [testbed-manager] 2025-04-04 00:34:45.854408 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-04-04 00:34:47.251210 | orchestrator | skipping: [testbed-node-0] 2025-04-04 00:34:47.252995 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-04-04 00:34:47.254804 | orchestrator | skipping: [testbed-node-1] 2025-04-04 00:34:47.254843 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-04-04 00:34:47.256393 | orchestrator | skipping: [testbed-node-2] 2025-04-04 00:34:47.257295 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-04-04 00:34:47.258124 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-04-04 00:34:47.259439 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-04-04 00:34:47.260725 | orchestrator | 2025-04-04 00:34:47.261126 | orchestrator | TASK [osism.commons.limits : Include limits tasks] ***************************** 2025-04-04 00:34:47.263063 | orchestrator | Friday 04 April 2025 00:34:47 +0000 (0:00:01.573) 0:03:20.421 ********** 2025-04-04 00:34:47.349676 | orchestrator | skipping: [testbed-manager] 2025-04-04 00:34:47.381466 | orchestrator | skipping: [testbed-node-3] 2025-04-04 00:34:47.410885 | orchestrator 
| skipping: [testbed-node-4] 2025-04-04 00:34:47.437138 | orchestrator | skipping: [testbed-node-5] 2025-04-04 00:34:47.595576 | orchestrator | skipping: [testbed-node-0] 2025-04-04 00:34:47.596626 | orchestrator | skipping: [testbed-node-1] 2025-04-04 00:34:47.597820 | orchestrator | skipping: [testbed-node-2] 2025-04-04 00:34:47.600269 | orchestrator | 2025-04-04 00:34:53.026214 | orchestrator | TASK [osism.commons.services : Populate service facts] ************************* 2025-04-04 00:34:53.026350 | orchestrator | Friday 04 April 2025 00:34:47 +0000 (0:00:00.346) 0:03:20.768 ********** 2025-04-04 00:34:53.026387 | orchestrator | ok: [testbed-node-4] 2025-04-04 00:34:53.027943 | orchestrator | ok: [testbed-node-0] 2025-04-04 00:34:53.028995 | orchestrator | ok: [testbed-node-5] 2025-04-04 00:34:53.029640 | orchestrator | ok: [testbed-node-1] 2025-04-04 00:34:53.030446 | orchestrator | ok: [testbed-node-3] 2025-04-04 00:34:53.031256 | orchestrator | ok: [testbed-node-2] 2025-04-04 00:34:53.032558 | orchestrator | ok: [testbed-manager] 2025-04-04 00:34:53.033805 | orchestrator | 2025-04-04 00:34:53.035057 | orchestrator | TASK [osism.commons.services : Check services] ********************************* 2025-04-04 00:34:53.036030 | orchestrator | Friday 04 April 2025 00:34:53 +0000 (0:00:05.428) 0:03:26.196 ********** 2025-04-04 00:34:53.110391 | orchestrator | skipping: [testbed-manager] => (item=nscd)  2025-04-04 00:34:53.169643 | orchestrator | skipping: [testbed-manager] 2025-04-04 00:34:53.169747 | orchestrator | skipping: [testbed-node-3] => (item=nscd)  2025-04-04 00:34:53.170128 | orchestrator | skipping: [testbed-node-4] => (item=nscd)  2025-04-04 00:34:53.211326 | orchestrator | skipping: [testbed-node-3] 2025-04-04 00:34:53.213934 | orchestrator | skipping: [testbed-node-5] => (item=nscd)  2025-04-04 00:34:53.250670 | orchestrator | skipping: [testbed-node-4] 2025-04-04 00:34:53.252888 | orchestrator | skipping: [testbed-node-0] => (item=nscd)  
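The "Check services" step above runs against the facts gathered by "Populate service facts"; a rough sketch of that membership test (the fact structure mirrors Ansible's `ansible_facts.services`, but the helper itself is hypothetical, not the role's code):

```python
# Sketch: decide which required services are missing, given service_facts-style
# data keyed as "<name>.service". Hypothetical helper, assumed fact layout.
def missing_services(required, services_facts):
    """Return the required service names that have no matching systemd unit."""
    present = {name.removesuffix(".service") for name in services_facts}
    return [svc for svc in required if svc not in present]

if __name__ == "__main__":
    facts = {"cron.service": {"state": "running"}, "ssh.service": {"state": "running"}}
    print(missing_services(["cron", "nscd"], facts))
```

In the run above the nscd check is skipped on every host, while cron is found and reported `ok` by the subsequent "Start/enable required services" task.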
2025-04-04 00:34:53.286242 | orchestrator | skipping: [testbed-node-5] 2025-04-04 00:34:53.371617 | orchestrator | skipping: [testbed-node-1] => (item=nscd)  2025-04-04 00:34:53.374931 | orchestrator | skipping: [testbed-node-0] 2025-04-04 00:34:53.375857 | orchestrator | skipping: [testbed-node-1] 2025-04-04 00:34:53.378441 | orchestrator | skipping: [testbed-node-2] => (item=nscd)  2025-04-04 00:34:53.379484 | orchestrator | skipping: [testbed-node-2] 2025-04-04 00:34:53.379525 | orchestrator | 2025-04-04 00:34:53.379594 | orchestrator | TASK [osism.commons.services : Start/enable required services] ***************** 2025-04-04 00:34:53.380855 | orchestrator | Friday 04 April 2025 00:34:53 +0000 (0:00:00.343) 0:03:26.540 ********** 2025-04-04 00:34:54.480744 | orchestrator | ok: [testbed-node-3] => (item=cron) 2025-04-04 00:34:54.481424 | orchestrator | ok: [testbed-node-4] => (item=cron) 2025-04-04 00:34:54.483355 | orchestrator | ok: [testbed-manager] => (item=cron) 2025-04-04 00:34:54.483418 | orchestrator | ok: [testbed-node-5] => (item=cron) 2025-04-04 00:34:54.484707 | orchestrator | ok: [testbed-node-0] => (item=cron) 2025-04-04 00:34:54.485615 | orchestrator | ok: [testbed-node-1] => (item=cron) 2025-04-04 00:34:54.486636 | orchestrator | ok: [testbed-node-2] => (item=cron) 2025-04-04 00:34:54.487451 | orchestrator | 2025-04-04 00:34:54.488427 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ****** 2025-04-04 00:34:54.489125 | orchestrator | Friday 04 April 2025 00:34:54 +0000 (0:00:01.111) 0:03:27.651 ********** 2025-04-04 00:34:54.981154 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-04-04 00:34:54.981363 | orchestrator | 2025-04-04 00:34:54.982724 | orchestrator | TASK [osism.commons.motd : Remove 
update-motd package] ************************* 2025-04-04 00:34:54.983975 | orchestrator | Friday 04 April 2025 00:34:54 +0000 (0:00:00.501) 0:03:28.153 ********** 2025-04-04 00:34:56.372760 | orchestrator | ok: [testbed-node-4] 2025-04-04 00:34:56.373589 | orchestrator | ok: [testbed-node-3] 2025-04-04 00:34:56.373634 | orchestrator | ok: [testbed-manager] 2025-04-04 00:34:56.373696 | orchestrator | ok: [testbed-node-5] 2025-04-04 00:34:56.374976 | orchestrator | ok: [testbed-node-0] 2025-04-04 00:34:56.377825 | orchestrator | ok: [testbed-node-1] 2025-04-04 00:34:56.378094 | orchestrator | ok: [testbed-node-2] 2025-04-04 00:34:56.379848 | orchestrator | 2025-04-04 00:34:56.382317 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] ************* 2025-04-04 00:34:56.382802 | orchestrator | Friday 04 April 2025 00:34:56 +0000 (0:00:01.388) 0:03:29.541 ********** 2025-04-04 00:34:57.007931 | orchestrator | ok: [testbed-node-3] 2025-04-04 00:34:57.008750 | orchestrator | ok: [testbed-node-4] 2025-04-04 00:34:57.008883 | orchestrator | ok: [testbed-manager] 2025-04-04 00:34:57.008901 | orchestrator | ok: [testbed-node-5] 2025-04-04 00:34:57.008930 | orchestrator | ok: [testbed-node-0] 2025-04-04 00:34:57.008995 | orchestrator | ok: [testbed-node-1] 2025-04-04 00:34:57.010403 | orchestrator | ok: [testbed-node-2] 2025-04-04 00:34:57.010708 | orchestrator | 2025-04-04 00:34:57.011796 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] ************** 2025-04-04 00:34:57.011863 | orchestrator | Friday 04 April 2025 00:34:56 +0000 (0:00:00.633) 0:03:30.175 ********** 2025-04-04 00:34:57.694988 | orchestrator | changed: [testbed-node-3] 2025-04-04 00:34:57.695181 | orchestrator | changed: [testbed-node-4] 2025-04-04 00:34:57.696110 | orchestrator | changed: [testbed-manager] 2025-04-04 00:34:57.696819 | orchestrator | changed: [testbed-node-5] 2025-04-04 00:34:57.698253 | orchestrator | changed: [testbed-node-0] 
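The "Disable the dynamic motd-news service" task above reports `changed` on all seven hosts; on Ubuntu this typically means forcing `ENABLED=0` in `/etc/default/motd-news`. A minimal stand-alone sketch of that edit (assumed file format; the osism.commons.motd role may implement it differently):

```python
# Sketch: disable Ubuntu's dynamic motd-news by rewriting ENABLED=... lines.
# Assumed config layout; not the actual implementation of osism.commons.motd.
def disable_motd_news(text: str) -> str:
    """Return the motd-news config with every ENABLED= line forced to 0."""
    lines = []
    for line in text.splitlines():
        if line.strip().startswith("ENABLED="):
            lines.append("ENABLED=0")
        else:
            lines.append(line)
    return "\n".join(lines) + "\n"

if __name__ == "__main__":
    sample = "# /etc/default/motd-news\nENABLED=1\n"
    print(disable_motd_news(sample), end="")
```

Because the rewrite is idempotent, a second playbook run against the same hosts would report `ok` rather than `changed` for this task.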
2025-04-04 00:34:57.698988 | orchestrator | changed: [testbed-node-1] 2025-04-04 00:34:57.700487 | orchestrator | changed: [testbed-node-2] 2025-04-04 00:34:57.700972 | orchestrator | 2025-04-04 00:34:57.701946 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] ********** 2025-04-04 00:34:57.702757 | orchestrator | Friday 04 April 2025 00:34:57 +0000 (0:00:00.689) 0:03:30.865 ********** 2025-04-04 00:34:58.299688 | orchestrator | ok: [testbed-node-4] 2025-04-04 00:34:58.301164 | orchestrator | ok: [testbed-node-3] 2025-04-04 00:34:58.302647 | orchestrator | ok: [testbed-node-1] 2025-04-04 00:34:58.304549 | orchestrator | ok: [testbed-node-5] 2025-04-04 00:34:58.305192 | orchestrator | ok: [testbed-node-0] 2025-04-04 00:34:58.305721 | orchestrator | ok: [testbed-manager] 2025-04-04 00:34:58.306310 | orchestrator | ok: [testbed-node-2] 2025-04-04 00:34:58.306975 | orchestrator | 2025-04-04 00:34:58.307577 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] **************************** 2025-04-04 00:34:58.308370 | orchestrator | Friday 04 April 2025 00:34:58 +0000 (0:00:00.604) 0:03:31.470 ********** 2025-04-04 00:34:59.246639 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1743725042.823912, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-04-04 00:34:59.248534 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 
'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1743725049.7092586, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-04-04 00:34:59.249495 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1743725063.0140018, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-04-04 00:34:59.249527 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1743725059.3303592, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-04-04 00:34:59.249551 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1743725042.7304986, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 
'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-04-04 00:34:59.249866 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1743725046.9769402, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-04-04 00:34:59.250770 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1743725062.123594, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-04-04 00:34:59.251130 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1743725010.1963592, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-04-04 00:34:59.251703 | orchestrator | changed: [testbed-node-4] => 
(item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1743724991.5720928, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-04-04 00:34:59.252660 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1743725000.7166722, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-04-04 00:34:59.254592 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1743725014.355002, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-04-04 00:34:59.258204 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 
2049, 'nlink': 1, 'atime': 1743724991.7259603, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-04-04 00:34:59.258236 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1743724996.118477, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-04-04 00:34:59.258260 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1743725086.526921, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-04-04 00:35:00.268514 | orchestrator | 2025-04-04 00:35:00.268640 | orchestrator | TASK [osism.commons.motd : Copy motd file] ************************************* 2025-04-04 00:35:00.268660 | orchestrator | Friday 04 April 2025 00:34:59 +0000 (0:00:00.948) 0:03:32.418 ********** 2025-04-04 00:35:00.268691 | orchestrator | changed: [testbed-node-3] 2025-04-04 00:35:00.269422 | orchestrator | changed: [testbed-node-5] 2025-04-04 00:35:00.270406 | orchestrator | changed: [testbed-node-4] 2025-04-04 
00:35:00.271138 | orchestrator | changed: [testbed-node-0]
2025-04-04 00:35:00.271691 | orchestrator | changed: [testbed-node-1]
2025-04-04 00:35:00.272752 | orchestrator | changed: [testbed-node-2]
2025-04-04 00:35:00.273594 | orchestrator | changed: [testbed-manager]
2025-04-04 00:35:00.274428 | orchestrator |
2025-04-04 00:35:00.275209 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************
2025-04-04 00:35:00.276168 | orchestrator | Friday 04 April 2025 00:35:00 +0000 (0:00:01.017) 0:03:33.436 **********
2025-04-04 00:35:01.399676 | orchestrator | changed: [testbed-node-4]
2025-04-04 00:35:01.400525 | orchestrator | changed: [testbed-node-3]
2025-04-04 00:35:01.402863 | orchestrator | changed: [testbed-node-1]
2025-04-04 00:35:01.404548 | orchestrator | changed: [testbed-node-5]
2025-04-04 00:35:01.405458 | orchestrator | changed: [testbed-node-0]
2025-04-04 00:35:01.406644 | orchestrator | changed: [testbed-manager]
2025-04-04 00:35:01.406826 | orchestrator | changed: [testbed-node-2]
2025-04-04 00:35:01.407482 | orchestrator |
2025-04-04 00:35:01.407881 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ********************
2025-04-04 00:35:01.408661 | orchestrator | Friday 04 April 2025 00:35:01 +0000 (0:00:01.132) 0:03:34.569 **********
2025-04-04 00:35:01.477333 | orchestrator | skipping: [testbed-manager]
2025-04-04 00:35:01.513163 | orchestrator | skipping: [testbed-node-3]
2025-04-04 00:35:01.570519 | orchestrator | skipping: [testbed-node-4]
2025-04-04 00:35:01.610553 | orchestrator | skipping: [testbed-node-5]
2025-04-04 00:35:01.647707 | orchestrator | skipping: [testbed-node-0]
2025-04-04 00:35:01.721785 | orchestrator | skipping: [testbed-node-1]
2025-04-04 00:35:01.722554 | orchestrator | skipping: [testbed-node-2]
2025-04-04 00:35:01.722586 | orchestrator |
2025-04-04 00:35:01.723658 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] ****************
2025-04-04 00:35:01.724492 | orchestrator | Friday 04 April 2025 00:35:01 +0000 (0:00:00.322) 0:03:34.891 **********
2025-04-04 00:35:02.528307 | orchestrator | ok: [testbed-manager]
2025-04-04 00:35:02.529325 | orchestrator | ok: [testbed-node-4]
2025-04-04 00:35:02.530787 | orchestrator | ok: [testbed-node-3]
2025-04-04 00:35:02.532076 | orchestrator | ok: [testbed-node-5]
2025-04-04 00:35:02.532491 | orchestrator | ok: [testbed-node-0]
2025-04-04 00:35:02.536979 | orchestrator | ok: [testbed-node-1]
2025-04-04 00:35:02.537173 | orchestrator | ok: [testbed-node-2]
2025-04-04 00:35:02.537443 | orchestrator |
2025-04-04 00:35:02.537481 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ********
2025-04-04 00:35:02.537676 | orchestrator | Friday 04 April 2025 00:35:02 +0000 (0:00:00.804) 0:03:35.696 **********
2025-04-04 00:35:03.048600 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-04-04 00:35:10.350314 | orchestrator |
2025-04-04 00:35:10.350500 | orchestrator | TASK [osism.services.rng : Install rng package] ********************************
2025-04-04 00:35:10.350525 | orchestrator | Friday 04 April 2025 00:35:03 +0000 (0:00:00.517) 0:03:36.214 **********
2025-04-04 00:35:10.350556 | orchestrator | ok: [testbed-manager]
2025-04-04 00:35:10.350631 | orchestrator | changed: [testbed-node-4]
2025-04-04 00:35:10.352417 | orchestrator | changed: [testbed-node-5]
2025-04-04 00:35:10.353895 | orchestrator | changed: [testbed-node-0]
2025-04-04 00:35:10.354972 | orchestrator | changed: [testbed-node-1]
2025-04-04 00:35:10.356636 | orchestrator | changed: [testbed-node-3]
2025-04-04 00:35:10.358567 | orchestrator | changed: [testbed-node-2]
2025-04-04 00:35:10.359368 | orchestrator |
2025-04-04 00:35:10.360534 | orchestrator | TASK [osism.services.rng : Remove haveged package] *****************************
2025-04-04 00:35:10.361788 | orchestrator | Friday 04 April 2025 00:35:10 +0000 (0:00:07.306) 0:03:43.520 **********
2025-04-04 00:35:11.806158 | orchestrator | ok: [testbed-node-3]
2025-04-04 00:35:11.806384 | orchestrator | ok: [testbed-node-4]
2025-04-04 00:35:11.806810 | orchestrator | ok: [testbed-node-5]
2025-04-04 00:35:11.807760 | orchestrator | ok: [testbed-node-0]
2025-04-04 00:35:11.808585 | orchestrator | ok: [testbed-node-1]
2025-04-04 00:35:11.808879 | orchestrator | ok: [testbed-node-2]
2025-04-04 00:35:11.809173 | orchestrator | ok: [testbed-manager]
2025-04-04 00:35:11.810483 | orchestrator |
2025-04-04 00:35:11.811550 | orchestrator | TASK [osism.services.rng : Manage rng service] *********************************
2025-04-04 00:35:11.812172 | orchestrator | Friday 04 April 2025 00:35:11 +0000 (0:00:01.455) 0:03:44.976 **********
2025-04-04 00:35:12.937816 | orchestrator | ok: [testbed-node-3]
2025-04-04 00:35:12.938000 | orchestrator | ok: [testbed-node-4]
2025-04-04 00:35:12.938110 | orchestrator | ok: [testbed-manager]
2025-04-04 00:35:12.938663 | orchestrator | ok: [testbed-node-5]
2025-04-04 00:35:12.938847 | orchestrator | ok: [testbed-node-0]
2025-04-04 00:35:12.940000 | orchestrator | ok: [testbed-node-1]
2025-04-04 00:35:12.940827 | orchestrator | ok: [testbed-node-2]
2025-04-04 00:35:12.941339 | orchestrator |
2025-04-04 00:35:12.944767 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] *****
2025-04-04 00:35:12.945133 | orchestrator | Friday 04 April 2025 00:35:12 +0000 (0:00:01.131) 0:03:46.107 **********
2025-04-04 00:35:13.423630 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-04-04 00:35:13.424287 | orchestrator |
2025-04-04 00:35:13.424329 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] *******************
2025-04-04 00:35:13.428092 | orchestrator | Friday 04 April 2025 00:35:13 +0000 (0:00:00.486) 0:03:46.594 **********
2025-04-04 00:35:23.169114 | orchestrator | changed: [testbed-node-1]
2025-04-04 00:35:23.169313 | orchestrator | changed: [testbed-node-5]
2025-04-04 00:35:23.169338 | orchestrator | changed: [testbed-node-0]
2025-04-04 00:35:23.169360 | orchestrator | changed: [testbed-node-4]
2025-04-04 00:35:23.171358 | orchestrator | changed: [testbed-node-3]
2025-04-04 00:35:23.172373 | orchestrator | changed: [testbed-node-2]
2025-04-04 00:35:23.173777 | orchestrator | changed: [testbed-manager]
2025-04-04 00:35:23.174463 | orchestrator |
2025-04-04 00:35:23.175274 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] ****************
2025-04-04 00:35:23.176060 | orchestrator | Friday 04 April 2025 00:35:23 +0000 (0:00:09.735) 0:03:56.330 **********
2025-04-04 00:35:23.838983 | orchestrator | changed: [testbed-node-3]
2025-04-04 00:35:23.840498 | orchestrator | changed: [testbed-manager]
2025-04-04 00:35:23.841458 | orchestrator | changed: [testbed-node-4]
2025-04-04 00:35:23.843862 | orchestrator | changed: [testbed-node-5]
2025-04-04 00:35:23.843924 | orchestrator | changed: [testbed-node-0]
2025-04-04 00:35:23.844858 | orchestrator | changed: [testbed-node-1]
2025-04-04 00:35:23.845412 | orchestrator | changed: [testbed-node-2]
2025-04-04 00:35:23.846183 | orchestrator |
2025-04-04 00:35:23.846883 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] ***********
2025-04-04 00:35:23.847357 | orchestrator | Friday 04 April 2025 00:35:23 +0000 (0:00:01.158) 0:03:57.011 **********
2025-04-04 00:35:25.002799 | orchestrator | changed: [testbed-node-3]
2025-04-04 00:35:25.003611 | orchestrator | changed: [testbed-node-5]
2025-04-04 00:35:25.003669 | orchestrator | changed: [testbed-manager]
2025-04-04 00:35:25.005466 | orchestrator | changed: [testbed-node-0]
2025-04-04 00:35:25.005674 | orchestrator | changed: [testbed-node-4]
2025-04-04 00:35:25.005736 | orchestrator | changed: [testbed-node-1]
2025-04-04 00:35:25.007269 | orchestrator | changed: [testbed-node-2]
2025-04-04 00:35:25.007801 | orchestrator |
2025-04-04 00:35:25.008625 | orchestrator | TASK [osism.services.smartd : Manage smartd service] ***************************
2025-04-04 00:35:25.009355 | orchestrator | Friday 04 April 2025 00:35:24 +0000 (0:00:01.158) 0:03:58.169 **********
2025-04-04 00:35:26.064794 | orchestrator | changed: [testbed-node-3]
2025-04-04 00:35:26.068437 | orchestrator | changed: [testbed-manager]
2025-04-04 00:35:26.069180 | orchestrator | changed: [testbed-node-4]
2025-04-04 00:35:26.070159 | orchestrator | changed: [testbed-node-5]
2025-04-04 00:35:26.070659 | orchestrator | changed: [testbed-node-0]
2025-04-04 00:35:26.072250 | orchestrator | changed: [testbed-node-1]
2025-04-04 00:35:26.072977 | orchestrator | changed: [testbed-node-2]
2025-04-04 00:35:26.073962 | orchestrator |
2025-04-04 00:35:26.075561 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ******
2025-04-04 00:35:26.076704 | orchestrator | Friday 04 April 2025 00:35:26 +0000 (0:00:01.064) 0:03:59.234 **********
2025-04-04 00:35:26.201439 | orchestrator | ok: [testbed-manager]
2025-04-04 00:35:26.255701 | orchestrator | ok: [testbed-node-3]
2025-04-04 00:35:26.294458 | orchestrator | ok: [testbed-node-4]
2025-04-04 00:35:26.361543 | orchestrator | ok: [testbed-node-5]
2025-04-04 00:35:26.448999 | orchestrator | ok: [testbed-node-0]
2025-04-04 00:35:26.450636 | orchestrator | ok: [testbed-node-1]
2025-04-04 00:35:26.450736 | orchestrator | ok: [testbed-node-2]
2025-04-04 00:35:26.452450 | orchestrator |
2025-04-04 00:35:26.453140 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] ***
2025-04-04 00:35:26.453634 | orchestrator | Friday 04 April 2025 00:35:26 +0000 (0:00:00.385) 0:03:59.620 **********
2025-04-04 00:35:26.577657 | orchestrator | ok: [testbed-manager]
2025-04-04 00:35:26.613003 | orchestrator | ok: [testbed-node-3]
2025-04-04 00:35:26.651952 | orchestrator | ok: [testbed-node-4]
2025-04-04 00:35:26.692367 | orchestrator | ok: [testbed-node-5]
2025-04-04 00:35:26.772894 | orchestrator | ok: [testbed-node-0]
2025-04-04 00:35:26.773246 | orchestrator | ok: [testbed-node-1]
2025-04-04 00:35:26.773725 | orchestrator | ok: [testbed-node-2]
2025-04-04 00:35:26.774662 | orchestrator |
2025-04-04 00:35:26.775109 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] ***
2025-04-04 00:35:26.775635 | orchestrator | Friday 04 April 2025 00:35:26 +0000 (0:00:00.325) 0:03:59.945 **********
2025-04-04 00:35:26.888336 | orchestrator | ok: [testbed-manager]
2025-04-04 00:35:26.943515 | orchestrator | ok: [testbed-node-3]
2025-04-04 00:35:26.980647 | orchestrator | ok: [testbed-node-4]
2025-04-04 00:35:27.023492 | orchestrator | ok: [testbed-node-5]
2025-04-04 00:35:27.111946 | orchestrator | ok: [testbed-node-0]
2025-04-04 00:35:27.112494 | orchestrator | ok: [testbed-node-1]
2025-04-04 00:35:27.112529 | orchestrator | ok: [testbed-node-2]
2025-04-04 00:35:27.112544 | orchestrator |
2025-04-04 00:35:27.112566 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] **************************
2025-04-04 00:35:27.114351 | orchestrator | Friday 04 April 2025 00:35:27 +0000 (0:00:00.334) 0:04:00.279 **********
2025-04-04 00:35:32.479864 | orchestrator | ok: [testbed-node-4]
2025-04-04 00:35:32.481232 | orchestrator | ok: [testbed-node-3]
2025-04-04 00:35:32.481948 | orchestrator | ok: [testbed-node-0]
2025-04-04 00:35:32.482987 | orchestrator | ok: [testbed-node-5]
2025-04-04 00:35:32.484198 | orchestrator | ok: [testbed-node-1]
2025-04-04 00:35:32.485121 | orchestrator | ok: [testbed-node-2]
2025-04-04 00:35:32.485578 | orchestrator | ok: [testbed-manager]
2025-04-04 00:35:32.486236 | orchestrator |
2025-04-04 00:35:32.486708 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] *******
2025-04-04 00:35:32.487456 | orchestrator | Friday 04 April 2025 00:35:32 +0000 (0:00:05.370) 0:04:05.650 **********
2025-04-04 00:35:32.959843 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-04-04 00:35:32.960019 | orchestrator |
2025-04-04 00:35:32.963681 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************
2025-04-04 00:35:32.964762 | orchestrator | Friday 04 April 2025 00:35:32 +0000 (0:00:00.476) 0:04:06.127 **********
2025-04-04 00:35:33.035358 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)
2025-04-04 00:35:33.036837 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)
2025-04-04 00:35:33.082726 | orchestrator | skipping: [testbed-manager]
2025-04-04 00:35:33.089249 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)
2025-04-04 00:35:33.139308 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)
2025-04-04 00:35:33.140328 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)
2025-04-04 00:35:33.142116 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)
2025-04-04 00:35:33.194712 | orchestrator | skipping: [testbed-node-3]
2025-04-04 00:35:33.198327 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)
2025-04-04 00:35:33.198387 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)
2025-04-04 00:35:33.236442 | orchestrator | skipping: [testbed-node-4]
2025-04-04 00:35:33.238717 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)
2025-04-04 00:35:33.239303 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)
2025-04-04 00:35:33.281396 | orchestrator | skipping: [testbed-node-5]
2025-04-04 00:35:33.281676 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)
2025-04-04 00:35:33.370633 | orchestrator | skipping: [testbed-node-0]
2025-04-04 00:35:33.371536 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)
2025-04-04 00:35:33.371958 | orchestrator | skipping: [testbed-node-1]
2025-04-04 00:35:33.372704 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)
2025-04-04 00:35:33.373539 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)
2025-04-04 00:35:33.374169 | orchestrator | skipping: [testbed-node-2]
2025-04-04 00:35:33.374950 | orchestrator |
2025-04-04 00:35:33.375382 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] ***************************
2025-04-04 00:35:33.375802 | orchestrator | Friday 04 April 2025 00:35:33 +0000 (0:00:00.416) 0:04:06.544 **********
2025-04-04 00:35:33.831490 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-04-04 00:35:33.831641 | orchestrator |
2025-04-04 00:35:33.832499 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ********************************
2025-04-04 00:35:33.833130 | orchestrator | Friday 04 April 2025 00:35:33 +0000 (0:00:00.458) 0:04:07.002 **********
2025-04-04 00:35:33.904399 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)
2025-04-04 00:35:33.906175 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)
2025-04-04 00:35:33.965014 | orchestrator | skipping: [testbed-manager]
2025-04-04 00:35:33.965479 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)
2025-04-04 00:35:34.011401 | orchestrator | skipping: [testbed-node-3]
2025-04-04 00:35:34.063624 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)
2025-04-04 00:35:34.064121 | orchestrator | skipping: [testbed-node-4]
2025-04-04 00:35:34.064863 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)
2025-04-04 00:35:34.114460 | orchestrator | skipping: [testbed-node-5]
2025-04-04 00:35:34.193276 | orchestrator | skipping: [testbed-node-0]
2025-04-04 00:35:34.196237 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)
2025-04-04 00:35:34.196324 | orchestrator | skipping: [testbed-node-1]
2025-04-04 00:35:34.197497 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)
2025-04-04 00:35:34.198860 | orchestrator | skipping: [testbed-node-2]
2025-04-04 00:35:34.199698 | orchestrator |
2025-04-04 00:35:34.200626 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] **************************
2025-04-04 00:35:34.201436 | orchestrator | Friday 04 April 2025 00:35:34 +0000 (0:00:00.361) 0:04:07.363 **********
2025-04-04 00:35:34.701167 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-04-04 00:35:34.702224 | orchestrator |
2025-04-04 00:35:34.704275 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] **********************
2025-04-04 00:36:09.765488 | orchestrator | Friday 04 April 2025 00:35:34 +0000 (0:00:00.507) 0:04:07.871 **********
2025-04-04 00:36:09.765657 | orchestrator | changed: [testbed-node-0]
2025-04-04 00:36:09.767182 | orchestrator | changed: [testbed-node-4]
2025-04-04 00:36:09.767220 | orchestrator | changed: [testbed-node-1]
2025-04-04 00:36:09.767235 | orchestrator | changed: [testbed-node-5]
2025-04-04 00:36:09.767248 | orchestrator | changed: [testbed-node-3]
2025-04-04 00:36:09.767268 | orchestrator | changed: [testbed-node-2]
2025-04-04 00:36:16.777259 | orchestrator | changed: [testbed-manager]
2025-04-04 00:36:16.777409 | orchestrator |
2025-04-04 00:36:16.777433 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************
2025-04-04 00:36:16.777476 | orchestrator | Friday 04 April 2025 00:36:09 +0000 (0:00:35.059) 0:04:42.930 **********
2025-04-04 00:36:16.777510 | orchestrator | changed: [testbed-manager]
2025-04-04 00:36:16.777845 | orchestrator | changed: [testbed-node-3]
2025-04-04 00:36:16.777889 | orchestrator | changed: [testbed-node-1]
2025-04-04 00:36:16.779383 | orchestrator | changed: [testbed-node-0]
2025-04-04 00:36:16.779965 | orchestrator | changed: [testbed-node-4]
2025-04-04 00:36:16.780009 | orchestrator | changed: [testbed-node-5]
2025-04-04 00:36:16.781629 | orchestrator | changed: [testbed-node-2]
2025-04-04 00:36:16.782450 | orchestrator |
2025-04-04 00:36:16.783024 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] ***********
2025-04-04 00:36:16.783673 | orchestrator | Friday 04 April 2025 00:36:16 +0000 (0:00:07.016) 0:04:49.947 **********
2025-04-04 00:36:24.659296 | orchestrator | changed: [testbed-node-4]
2025-04-04 00:36:24.659543 | orchestrator | changed: [testbed-node-5]
2025-04-04 00:36:24.659669 | orchestrator | changed: [testbed-node-3]
2025-04-04 00:36:24.660297 | orchestrator | changed: [testbed-node-0]
2025-04-04 00:36:24.661018 | orchestrator | changed: [testbed-node-1]
2025-04-04 00:36:24.661682 | orchestrator | changed: [testbed-node-2]
2025-04-04 00:36:24.663876 | orchestrator | changed: [testbed-manager]
2025-04-04 00:36:24.664653 | orchestrator |
2025-04-04 00:36:24.665996 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] **********
2025-04-04 00:36:24.666534 | orchestrator | Friday 04 April 2025 00:36:24 +0000 (0:00:07.881) 0:04:57.828 **********
2025-04-04 00:36:26.236049 | orchestrator | ok: [testbed-node-3]
2025-04-04 00:36:26.240496 | orchestrator | ok: [testbed-node-5]
2025-04-04 00:36:26.240551 | orchestrator | ok: [testbed-node-1]
2025-04-04 00:36:26.243768 | orchestrator | ok: [testbed-node-4]
2025-04-04 00:36:26.243850 | orchestrator | ok: [testbed-manager]
2025-04-04 00:36:26.243870 | orchestrator | ok: [testbed-node-0]
2025-04-04 00:36:26.243889 | orchestrator | ok: [testbed-node-2]
2025-04-04 00:36:26.244834 | orchestrator |
2025-04-04 00:36:26.245619 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] ***
2025-04-04 00:36:26.246702 | orchestrator | Friday 04 April 2025 00:36:26 +0000 (0:00:01.576) 0:04:59.405 **********
2025-04-04 00:36:32.286523 | orchestrator | changed: [testbed-node-1]
2025-04-04 00:36:32.287381 | orchestrator | changed: [testbed-node-3]
2025-04-04 00:36:32.287424 | orchestrator | changed: [testbed-node-4]
2025-04-04 00:36:32.289012 | orchestrator | changed: [testbed-node-5]
2025-04-04 00:36:32.289912 | orchestrator | changed: [testbed-node-0]
2025-04-04 00:36:32.292842 | orchestrator | changed: [testbed-node-2]
2025-04-04 00:36:32.757384 | orchestrator | changed: [testbed-manager]
2025-04-04 00:36:32.757492 | orchestrator |
2025-04-04 00:36:32.757512 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] *************************
2025-04-04 00:36:32.757529 | orchestrator | Friday 04 April 2025 00:36:32 +0000 (0:00:06.049) 0:05:05.455 **********
2025-04-04 00:36:32.757561 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-04-04 00:36:32.761042 | orchestrator |
2025-04-04 00:36:33.558905 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] *******
2025-04-04 00:36:33.559033 | orchestrator | Friday 04 April 2025 00:36:32 +0000 (0:00:00.472) 0:05:05.927 **********
2025-04-04 00:36:33.559072 | orchestrator | changed: [testbed-node-3]
2025-04-04 00:36:33.559808 | orchestrator | changed: [testbed-node-4]
2025-04-04 00:36:33.562453 | orchestrator | changed: [testbed-manager]
2025-04-04 00:36:33.563688 | orchestrator | changed: [testbed-node-5]
2025-04-04 00:36:33.565178 | orchestrator | changed: [testbed-node-0]
2025-04-04 00:36:33.566579 | orchestrator | changed: [testbed-node-1]
2025-04-04 00:36:33.567513 | orchestrator | changed: [testbed-node-2]
2025-04-04 00:36:33.567609 | orchestrator |
2025-04-04 00:36:33.568591 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] *************************
2025-04-04 00:36:33.569263 | orchestrator | Friday 04 April 2025 00:36:33 +0000 (0:00:00.800) 0:05:06.727 **********
2025-04-04 00:36:35.350208 | orchestrator | ok: [testbed-node-3]
2025-04-04 00:36:35.351839 | orchestrator | ok: [testbed-node-4]
2025-04-04 00:36:35.351962 | orchestrator | ok: [testbed-node-5]
2025-04-04 00:36:35.352305 | orchestrator | ok: [testbed-node-0]
2025-04-04 00:36:35.352904 | orchestrator | ok: [testbed-node-1]
2025-04-04 00:36:35.354352 | orchestrator | ok: [testbed-node-2]
2025-04-04 00:36:35.354537 | orchestrator | ok: [testbed-manager]
2025-04-04 00:36:35.355016 | orchestrator |
2025-04-04 00:36:35.355378 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] ****************************
2025-04-04 00:36:35.355663 | orchestrator | Friday 04 April 2025 00:36:35 +0000 (0:00:01.790) 0:05:08.518 **********
2025-04-04 00:36:36.251296 | orchestrator | changed: [testbed-node-3]
2025-04-04 00:36:36.251457 | orchestrator | changed: [testbed-node-0]
2025-04-04 00:36:36.253251 | orchestrator | changed: [testbed-node-4]
2025-04-04 00:36:36.254183 | orchestrator | changed: [testbed-node-5]
2025-04-04 00:36:36.255601 | orchestrator | changed: [testbed-node-1]
2025-04-04 00:36:36.256551 | orchestrator | changed: [testbed-node-2]
2025-04-04 00:36:36.257293 | orchestrator | changed: [testbed-manager]
2025-04-04 00:36:36.261160 | orchestrator |
2025-04-04 00:36:36.261793 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] ***********************
2025-04-04 00:36:36.262971 | orchestrator | Friday 04 April 2025 00:36:36 +0000 (0:00:00.901) 0:05:09.420 **********
2025-04-04 00:36:36.322178 | orchestrator | skipping: [testbed-manager]
2025-04-04 00:36:36.360776 | orchestrator | skipping: [testbed-node-3]
2025-04-04 00:36:36.406829 | orchestrator | skipping: [testbed-node-4]
2025-04-04 00:36:36.450302 | orchestrator | skipping: [testbed-node-5]
2025-04-04 00:36:36.490608 | orchestrator | skipping: [testbed-node-0]
2025-04-04 00:36:36.571352 | orchestrator | skipping: [testbed-node-1]
2025-04-04 00:36:36.572977 | orchestrator | skipping: [testbed-node-2]
2025-04-04 00:36:36.574229 | orchestrator |
2025-04-04 00:36:36.575654 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] *********************
2025-04-04 00:36:36.576563 | orchestrator | Friday 04 April 2025 00:36:36 +0000 (0:00:00.323) 0:05:09.743 **********
2025-04-04 00:36:36.678207 | orchestrator | skipping: [testbed-manager]
2025-04-04 00:36:36.716901 | orchestrator | skipping: [testbed-node-3]
2025-04-04 00:36:36.774279 | orchestrator | skipping: [testbed-node-4]
2025-04-04 00:36:36.813023 | orchestrator | skipping: [testbed-node-5]
2025-04-04 00:36:37.039942 | orchestrator | skipping: [testbed-node-0]
2025-04-04 00:36:37.041711 | orchestrator | skipping: [testbed-node-1]
2025-04-04 00:36:37.041741 | orchestrator | skipping: [testbed-node-2]
2025-04-04 00:36:37.042494 | orchestrator |
2025-04-04 00:36:37.043736 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ******
2025-04-04 00:36:37.044725 | orchestrator | Friday 04 April 2025 00:36:37 +0000 (0:00:00.465) 0:05:10.209 **********
2025-04-04 00:36:37.181991 | orchestrator | ok: [testbed-manager]
2025-04-04 00:36:37.216549 | orchestrator | ok: [testbed-node-3]
2025-04-04 00:36:37.260390 | orchestrator | ok: [testbed-node-4]
2025-04-04 00:36:37.301216 | orchestrator | ok: [testbed-node-5]
2025-04-04 00:36:37.369763 | orchestrator | ok: [testbed-node-0]
2025-04-04 00:36:37.370437 | orchestrator | ok: [testbed-node-1]
2025-04-04 00:36:37.371945 | orchestrator | ok: [testbed-node-2]
2025-04-04 00:36:37.372197 | orchestrator |
2025-04-04 00:36:37.373597 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] ****
2025-04-04 00:36:37.373835 | orchestrator | Friday 04 April 2025 00:36:37 +0000 (0:00:00.332) 0:05:10.542 **********
2025-04-04 00:36:37.504211 | orchestrator | skipping: [testbed-manager]
2025-04-04 00:36:37.547086 | orchestrator | skipping: [testbed-node-3]
2025-04-04 00:36:37.583609 | orchestrator | skipping: [testbed-node-4]
2025-04-04 00:36:37.622287 | orchestrator | skipping: [testbed-node-5]
2025-04-04 00:36:37.703507 | orchestrator | skipping: [testbed-node-0]
2025-04-04 00:36:37.704508 | orchestrator | skipping: [testbed-node-1]
2025-04-04 00:36:37.706426 | orchestrator | skipping: [testbed-node-2]
2025-04-04 00:36:37.707226 | orchestrator |
2025-04-04 00:36:37.708463 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] ***
2025-04-04 00:36:37.710777 | orchestrator | Friday 04 April 2025 00:36:37 +0000 (0:00:00.333) 0:05:10.875 **********
2025-04-04 00:36:37.796852 | orchestrator | ok: [testbed-manager]
2025-04-04 00:36:37.885303 | orchestrator | ok: [testbed-node-3]
2025-04-04 00:36:37.930595 | orchestrator | ok: [testbed-node-4]
2025-04-04 00:36:37.970236 | orchestrator | ok: [testbed-node-5]
2025-04-04 00:36:38.061955 | orchestrator | ok: [testbed-node-0]
2025-04-04 00:36:38.063035 | orchestrator | ok: [testbed-node-1]
2025-04-04 00:36:38.063800 | orchestrator | ok: [testbed-node-2]
2025-04-04 00:36:38.064549 | orchestrator |
2025-04-04 00:36:38.064936 | orchestrator | TASK [osism.services.docker : Include block storage tasks] *********************
2025-04-04 00:36:38.066149 | orchestrator | Friday 04 April 2025 00:36:38 +0000 (0:00:00.359) 0:05:11.234 **********
2025-04-04 00:36:38.163454 | orchestrator | skipping: [testbed-manager]
2025-04-04 00:36:38.198308 | orchestrator | skipping: [testbed-node-3]
2025-04-04 00:36:38.233086 | orchestrator | skipping: [testbed-node-4]
2025-04-04 00:36:38.273159 | orchestrator | skipping: [testbed-node-5]
2025-04-04 00:36:38.330591 | orchestrator | skipping: [testbed-node-0]
2025-04-04 00:36:38.412029 | orchestrator | skipping: [testbed-node-1]
2025-04-04 00:36:38.412710 | orchestrator | skipping: [testbed-node-2]
2025-04-04 00:36:38.412849 | orchestrator |
2025-04-04 00:36:38.413716 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] **********************
2025-04-04 00:36:38.414265 | orchestrator | Friday 04 April 2025 00:36:38 +0000 (0:00:00.347) 0:05:11.582 **********
2025-04-04 00:36:38.490533 | orchestrator | skipping: [testbed-manager]
2025-04-04 00:36:38.528592 | orchestrator | skipping: [testbed-node-3]
2025-04-04 00:36:38.575244 | orchestrator | skipping: [testbed-node-4]
2025-04-04 00:36:38.614450 | orchestrator | skipping: [testbed-node-5]
2025-04-04 00:36:38.650313 | orchestrator | skipping: [testbed-node-0]
2025-04-04 00:36:38.723329 | orchestrator | skipping: [testbed-node-1]
2025-04-04 00:36:38.724233 | orchestrator | skipping: [testbed-node-2]
2025-04-04 00:36:38.725225 | orchestrator |
2025-04-04 00:36:38.726431 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ********************
2025-04-04 00:36:38.727186 | orchestrator | Friday 04 April 2025 00:36:38 +0000 (0:00:00.314) 0:05:11.897 **********
2025-04-04 00:36:39.366868 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-04-04 00:36:39.367148 | orchestrator |
2025-04-04 00:36:39.368308 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] ****
2025-04-04 00:36:39.368789 | orchestrator | Friday 04 April 2025 00:36:39 +0000 (0:00:00.641) 0:05:12.538 **********
2025-04-04 00:36:40.226416 | orchestrator | ok: [testbed-node-3]
2025-04-04 00:36:40.226608 | orchestrator | ok: [testbed-node-0]
2025-04-04 00:36:40.227400 | orchestrator | ok: [testbed-node-4]
2025-04-04 00:36:40.228028 | orchestrator | ok: [testbed-node-1]
2025-04-04 00:36:40.231000 | orchestrator | ok: [testbed-node-5]
2025-04-04 00:36:40.231363 | orchestrator | ok: [testbed-node-2]
2025-04-04 00:36:40.232248 | orchestrator | ok: [testbed-manager]
2025-04-04 00:36:40.232777 | orchestrator |
2025-04-04 00:36:40.233452 | orchestrator | TASK [osism.services.docker : Gather package facts] ****************************
2025-04-04 00:36:40.234233 | orchestrator | Friday 04 April 2025 00:36:40 +0000 (0:00:00.858) 0:05:13.397 **********
2025-04-04 00:36:43.606814 | orchestrator | ok: [testbed-node-5]
2025-04-04 00:36:43.607797 | orchestrator | ok: [testbed-node-3]
2025-04-04 00:36:43.609433 | orchestrator | ok: [testbed-node-2]
2025-04-04 00:36:43.610876 | orchestrator | ok: [testbed-node-4]
2025-04-04 00:36:43.612619 | orchestrator | ok: [testbed-node-0]
2025-04-04 00:36:43.614103 | orchestrator | ok: [testbed-node-1]
2025-04-04 00:36:43.615507 | orchestrator | ok: [testbed-manager]
2025-04-04 00:36:43.616469 | orchestrator |
2025-04-04 00:36:43.617089 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] ***
2025-04-04 00:36:43.617788 | orchestrator | Friday 04 April 2025 00:36:43 +0000 (0:00:03.381) 0:05:16.778 **********
2025-04-04 00:36:43.681257 | orchestrator | skipping: [testbed-manager] => (item=containerd)
2025-04-04 00:36:43.784954 | orchestrator | skipping: [testbed-manager] => (item=docker.io)
2025-04-04 00:36:43.785054 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)
2025-04-04 00:36:43.786099 | orchestrator | skipping: [testbed-node-3] => (item=containerd)
2025-04-04 00:36:43.787000 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)
2025-04-04 00:36:43.787410 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)
2025-04-04 00:36:43.886554 | orchestrator | skipping: [testbed-manager]
2025-04-04 00:36:43.887169 | orchestrator | skipping: [testbed-node-4] => (item=containerd)
2025-04-04 00:36:43.887781 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)
2025-04-04 00:36:43.888953 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)
2025-04-04 00:36:43.986535 | orchestrator | skipping: [testbed-node-3]
2025-04-04 00:36:43.989603 | orchestrator | skipping: [testbed-node-5] => (item=containerd)
2025-04-04 00:36:44.088601 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)
2025-04-04 00:36:44.088685 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)
2025-04-04 00:36:44.088710 | orchestrator | skipping: [testbed-node-4]
2025-04-04 00:36:44.090122 | orchestrator | skipping: [testbed-node-0] => (item=containerd)
2025-04-04 00:36:44.091334 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)
2025-04-04 00:36:44.092482 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)
2025-04-04 00:36:44.155489 | orchestrator | skipping: [testbed-node-5]
2025-04-04 00:36:44.155948 | orchestrator | skipping: [testbed-node-1] => (item=containerd)
2025-04-04 00:36:44.160042 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)
2025-04-04 00:36:44.312937 | orchestrator | skipping: [testbed-node-0]
2025-04-04 00:36:44.315517 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)
2025-04-04 00:36:44.316260 | orchestrator | skipping: [testbed-node-1]
2025-04-04 00:36:44.317597 | orchestrator | skipping: [testbed-node-2] => (item=containerd)
2025-04-04 00:36:44.319416 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)
2025-04-04 00:36:44.320315 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)
2025-04-04 00:36:44.320345 | orchestrator | skipping: [testbed-node-2]
2025-04-04 00:36:44.321977 | orchestrator |
2025-04-04 00:36:44.322661 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] *************
2025-04-04 00:36:44.324634 | orchestrator | Friday 04 April 2025 00:36:44 +0000 (0:00:00.704) 0:05:17.483 **********
2025-04-04 00:36:50.331449 | orchestrator | ok: [testbed-manager]
2025-04-04 00:36:50.332065 | orchestrator | changed: [testbed-node-3]
2025-04-04 00:36:50.332097 | orchestrator | changed: [testbed-node-4]
2025-04-04 00:36:50.332386 | orchestrator | changed: [testbed-node-5]
2025-04-04 00:36:50.333060 | orchestrator | changed: [testbed-node-1]
2025-04-04 00:36:50.334962 | orchestrator | changed: [testbed-node-2]
2025-04-04 00:36:50.335588 | orchestrator | changed: [testbed-node-0]
2025-04-04 00:36:50.336049 | orchestrator |
2025-04-04 00:36:50.336751 | orchestrator | TASK [osism.services.docker : Add repository gpg key] **************************
2025-04-04 00:36:50.336873 | orchestrator | Friday 04 April 2025 00:36:50 +0000 (0:00:06.016) 0:05:23.499 **********
2025-04-04 00:36:51.439668 | orchestrator | changed: [testbed-node-3]
2025-04-04 00:36:51.440457 | orchestrator | changed: [testbed-node-5]
2025-04-04 00:36:51.441340 | orchestrator | changed: [testbed-node-4]
2025-04-04 00:36:51.445017 | orchestrator | ok: [testbed-manager]
2025-04-04 00:36:51.446087 | orchestrator | changed: [testbed-node-1]
2025-04-04 00:36:51.446957 | orchestrator | changed: [testbed-node-0]
2025-04-04 00:36:51.447457 | orchestrator | changed: [testbed-node-2]
2025-04-04 00:36:51.448481 | orchestrator |
2025-04-04 00:36:51.449096 | orchestrator | TASK [osism.services.docker : Add repository] **********************************
2025-04-04 00:36:51.450352 | orchestrator | Friday 04 April 2025 00:36:51 +0000 (0:00:01.108) 0:05:24.608 **********
2025-04-04 00:36:57.824396 | orchestrator | ok: [testbed-manager]
2025-04-04 00:36:57.825903 | orchestrator | changed: [testbed-node-4]
2025-04-04 00:36:57.826556 | orchestrator | changed: [testbed-node-3]
2025-04-04 00:36:57.826596 | orchestrator | changed: [testbed-node-5]
2025-04-04 00:36:57.827166 | orchestrator | changed: [testbed-node-0]
2025-04-04 00:36:57.828944 | orchestrator | changed: [testbed-node-1]
2025-04-04 00:36:57.831097 | orchestrator | changed: [testbed-node-2]
2025-04-04 00:36:57.831666 | orchestrator |
2025-04-04 00:36:57.831699 | orchestrator | TASK [osism.services.docker : Update package cache] ****************************
2025-04-04 00:36:57.832247 | orchestrator | Friday 04 April 2025 00:36:57 +0000 (0:00:06.383) 0:05:30.991 **********
2025-04-04 00:37:00.660722 | orchestrator | changed: [testbed-node-3]
2025-04-04 00:37:00.661551 | orchestrator | changed: [testbed-node-4]
2025-04-04 00:37:00.661599 | orchestrator | changed: [testbed-node-5]
2025-04-04 00:37:00.661854 | orchestrator | changed: [testbed-node-0]
2025-04-04 00:37:00.663050 | orchestrator | changed: [testbed-node-1]
2025-04-04 00:37:00.664689 | orchestrator | changed: [testbed-node-2]
2025-04-04 00:37:00.665204 | orchestrator | changed: [testbed-manager]
2025-04-04 00:37:00.666988 | orchestrator |
2025-04-04 00:37:00.667060 | orchestrator | TASK [osism.services.docker : Pin docker package version] **********************
2025-04-04 00:37:00.667780 | orchestrator | Friday 04 April 2025 00:37:00 +0000 (0:00:02.836) 0:05:33.828 **********
2025-04-04 00:37:02.067711 | orchestrator | changed: [testbed-node-3]
2025-04-04 00:37:02.067887 | orchestrator | ok: [testbed-manager]
2025-04-04 00:37:02.069253 | orchestrator | changed: [testbed-node-5]
2025-04-04 00:37:02.070620 | orchestrator | changed: [testbed-node-4]
2025-04-04 00:37:02.071173 | orchestrator | changed: [testbed-node-0]
2025-04-04 00:37:02.072247 | orchestrator | changed: [testbed-node-1]
2025-04-04 00:37:02.072532 | orchestrator | changed: [testbed-node-2]
2025-04-04 00:37:02.073191 | orchestrator |
2025-04-04 00:37:02.075998 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ******************
2025-04-04 00:37:02.076775 | orchestrator | Friday 04 April 2025 00:37:02 +0000 (0:00:01.409) 0:05:35.238 **********
2025-04-04 00:37:03.668368 | orchestrator | ok: [testbed-manager]
2025-04-04 00:37:03.668857 | orchestrator | changed: [testbed-node-3]
2025-04-04 00:37:03.673242 | orchestrator | changed: [testbed-node-4]
2025-04-04 00:37:03.673347 | orchestrator | changed: [testbed-node-0]
2025-04-04 00:37:03.673366 | orchestrator | changed: [testbed-node-5]
2025-04-04 00:37:03.673381 | orchestrator | changed: [testbed-node-1]
2025-04-04 00:37:03.673395 | orchestrator | changed: [testbed-node-2]
2025-04-04 00:37:03.673414 | orchestrator |
2025-04-04 00:37:03.673841 | orchestrator | TASK [osism.services.docker : Unlock containerd package] ***********************
2025-04-04 00:37:03.673872 | orchestrator | Friday 04 April 2025 00:37:03 +0000 (0:00:01.601) 0:05:36.839 **********
2025-04-04 00:37:03.890495 | orchestrator | skipping: [testbed-node-3]
2025-04-04 00:37:03.963238 | orchestrator | skipping: [testbed-node-4]
2025-04-04 00:37:04.039691 | orchestrator | skipping: [testbed-node-5]
2025-04-04 00:37:04.134534 | orchestrator | skipping: [testbed-node-0]
2025-04-04 00:37:04.356595 | orchestrator | skipping: [testbed-node-1]
2025-04-04 00:37:04.356901 | orchestrator | skipping: [testbed-node-2]
2025-04-04 00:37:04.357733 | orchestrator | changed: [testbed-manager]
2025-04-04 00:37:04.358092 | orchestrator |
2025-04-04 00:37:04.358936 | orchestrator | TASK [osism.services.docker : Install containerd package] **********************
2025-04-04 00:37:04.359862 | orchestrator | Friday 04 April 2025 00:37:04 +0000 (0:00:00.687) 0:05:37.527 **********
2025-04-04 00:37:12.464882 | orchestrator | ok: [testbed-manager]
2025-04-04 00:37:12.465364 | orchestrator | changed: [testbed-node-3]
2025-04-04 00:37:12.465402 | orchestrator | changed: [testbed-node-4]
2025-04-04 00:37:12.465427 | orchestrator | changed: [testbed-node-5]
2025-04-04 00:37:12.466165 | orchestrator | changed: [testbed-node-1]
2025-04-04 00:37:12.466490 | orchestrator | changed: [testbed-node-2]
2025-04-04 00:37:12.466635 | orchestrator | changed: [testbed-node-0]
2025-04-04 00:37:12.469568 | orchestrator |
2025-04-04 00:37:12.469600 | orchestrator | TASK [osism.services.docker : Lock containerd package] *************************
2025-04-04 00:37:12.469689 | orchestrator | Friday 04 April 2025 00:37:12 +0000 (0:00:08.107) 0:05:45.635 **********
2025-04-04 00:37:13.365058 | orchestrator | changed: [testbed-node-3]
2025-04-04 00:37:13.365311 | orchestrator | changed: [testbed-manager]
2025-04-04 00:37:13.365343 | orchestrator | changed: [testbed-node-4]
2025-04-04 00:37:13.366216 | orchestrator | changed: [testbed-node-5]
2025-04-04 00:37:13.366249 | orchestrator | changed: [testbed-node-0]
2025-04-04 00:37:13.366344 | orchestrator | changed: [testbed-node-1]
2025-04-04 00:37:13.366780 | orchestrator | changed: [testbed-node-2]
2025-04-04 00:37:13.367168 | orchestrator |
2025-04-04 00:37:13.367395 | orchestrator | TASK [osism.services.docker : Install docker-cli package] **********************
2025-04-04 00:37:13.367692 | orchestrator | Friday 04 April 2025 00:37:13 +0000 (0:00:00.901) 0:05:46.536 **********
2025-04-04 00:37:24.659531 | orchestrator | ok: [testbed-manager]
2025-04-04 00:37:24.660383 | orchestrator | changed: [testbed-node-5]
2025-04-04 00:37:24.660420 | orchestrator | changed: [testbed-node-4]
2025-04-04
00:37:24.660436 | orchestrator | changed: [testbed-node-3] 2025-04-04 00:37:24.660459 | orchestrator | changed: [testbed-node-1] 2025-04-04 00:37:24.661018 | orchestrator | changed: [testbed-node-0] 2025-04-04 00:37:24.661334 | orchestrator | changed: [testbed-node-2] 2025-04-04 00:37:24.661952 | orchestrator | 2025-04-04 00:37:24.663502 | orchestrator | TASK [osism.services.docker : Install docker package] ************************** 2025-04-04 00:37:24.663780 | orchestrator | Friday 04 April 2025 00:37:24 +0000 (0:00:11.286) 0:05:57.823 ********** 2025-04-04 00:37:35.053154 | orchestrator | ok: [testbed-manager] 2025-04-04 00:37:35.053453 | orchestrator | changed: [testbed-node-4] 2025-04-04 00:37:35.053482 | orchestrator | changed: [testbed-node-5] 2025-04-04 00:37:35.053504 | orchestrator | changed: [testbed-node-1] 2025-04-04 00:37:35.055433 | orchestrator | changed: [testbed-node-3] 2025-04-04 00:37:35.057349 | orchestrator | changed: [testbed-node-0] 2025-04-04 00:37:35.058100 | orchestrator | changed: [testbed-node-2] 2025-04-04 00:37:35.058402 | orchestrator | 2025-04-04 00:37:35.058436 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] *** 2025-04-04 00:37:35.059604 | orchestrator | Friday 04 April 2025 00:37:35 +0000 (0:00:10.395) 0:06:08.218 ********** 2025-04-04 00:37:35.539853 | orchestrator | ok: [testbed-manager] => (item=python3-docker) 2025-04-04 00:37:35.542071 | orchestrator | ok: [testbed-node-3] => (item=python3-docker) 2025-04-04 00:37:36.264370 | orchestrator | ok: [testbed-node-4] => (item=python3-docker) 2025-04-04 00:37:36.264703 | orchestrator | ok: [testbed-node-5] => (item=python3-docker) 2025-04-04 00:37:36.264736 | orchestrator | ok: [testbed-node-0] => (item=python3-docker) 2025-04-04 00:37:36.265547 | orchestrator | ok: [testbed-node-3] => (item=python-docker) 2025-04-04 00:37:36.265975 | orchestrator | ok: [testbed-node-1] => (item=python3-docker) 2025-04-04 00:37:36.266384 | orchestrator 
| ok: [testbed-node-4] => (item=python-docker) 2025-04-04 00:37:36.267180 | orchestrator | ok: [testbed-manager] => (item=python-docker) 2025-04-04 00:37:36.270462 | orchestrator | ok: [testbed-node-2] => (item=python3-docker) 2025-04-04 00:37:36.271523 | orchestrator | ok: [testbed-node-5] => (item=python-docker) 2025-04-04 00:37:36.271819 | orchestrator | ok: [testbed-node-0] => (item=python-docker) 2025-04-04 00:37:36.272439 | orchestrator | ok: [testbed-node-1] => (item=python-docker) 2025-04-04 00:37:36.272951 | orchestrator | ok: [testbed-node-2] => (item=python-docker) 2025-04-04 00:37:36.273423 | orchestrator | 2025-04-04 00:37:36.273774 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ****************** 2025-04-04 00:37:36.274402 | orchestrator | Friday 04 April 2025 00:37:36 +0000 (0:00:01.216) 0:06:09.435 ********** 2025-04-04 00:37:36.430497 | orchestrator | skipping: [testbed-manager] 2025-04-04 00:37:36.524471 | orchestrator | skipping: [testbed-node-3] 2025-04-04 00:37:36.603426 | orchestrator | skipping: [testbed-node-4] 2025-04-04 00:37:36.693913 | orchestrator | skipping: [testbed-node-5] 2025-04-04 00:37:36.797578 | orchestrator | skipping: [testbed-node-0] 2025-04-04 00:37:36.923682 | orchestrator | skipping: [testbed-node-1] 2025-04-04 00:37:36.924510 | orchestrator | skipping: [testbed-node-2] 2025-04-04 00:37:36.926082 | orchestrator | 2025-04-04 00:37:36.926693 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] *** 2025-04-04 00:37:36.927385 | orchestrator | Friday 04 April 2025 00:37:36 +0000 (0:00:00.657) 0:06:10.093 ********** 2025-04-04 00:37:40.325500 | orchestrator | ok: [testbed-manager] 2025-04-04 00:37:40.325672 | orchestrator | changed: [testbed-node-1] 2025-04-04 00:37:40.325773 | orchestrator | changed: [testbed-node-5] 2025-04-04 00:37:40.326394 | orchestrator | changed: [testbed-node-2] 2025-04-04 00:37:40.327479 | orchestrator | changed: [testbed-node-4] 
2025-04-04 00:37:40.328772 | orchestrator | changed: [testbed-node-0] 2025-04-04 00:37:40.329216 | orchestrator | changed: [testbed-node-3] 2025-04-04 00:37:40.329604 | orchestrator | 2025-04-04 00:37:40.329990 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] *** 2025-04-04 00:37:40.330532 | orchestrator | Friday 04 April 2025 00:37:40 +0000 (0:00:03.400) 0:06:13.494 ********** 2025-04-04 00:37:40.502585 | orchestrator | skipping: [testbed-manager] 2025-04-04 00:37:40.580822 | orchestrator | skipping: [testbed-node-3] 2025-04-04 00:37:40.961528 | orchestrator | skipping: [testbed-node-4] 2025-04-04 00:37:41.048256 | orchestrator | skipping: [testbed-node-5] 2025-04-04 00:37:41.136739 | orchestrator | skipping: [testbed-node-0] 2025-04-04 00:37:41.263838 | orchestrator | skipping: [testbed-node-1] 2025-04-04 00:37:41.264610 | orchestrator | skipping: [testbed-node-2] 2025-04-04 00:37:41.264824 | orchestrator | 2025-04-04 00:37:41.266126 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] *** 2025-04-04 00:37:41.267131 | orchestrator | Friday 04 April 2025 00:37:41 +0000 (0:00:00.939) 0:06:14.434 ********** 2025-04-04 00:37:41.359804 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)  2025-04-04 00:37:41.360297 | orchestrator | skipping: [testbed-manager] => (item=python-docker)  2025-04-04 00:37:41.445220 | orchestrator | skipping: [testbed-manager] 2025-04-04 00:37:41.445484 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)  2025-04-04 00:37:41.446198 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)  2025-04-04 00:37:41.539077 | orchestrator | skipping: [testbed-node-3] 2025-04-04 00:37:41.540768 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)  2025-04-04 00:37:41.543350 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)  2025-04-04 
00:37:41.616966 | orchestrator | skipping: [testbed-node-4] 2025-04-04 00:37:41.617659 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)  2025-04-04 00:37:41.619257 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)  2025-04-04 00:37:41.703469 | orchestrator | skipping: [testbed-node-5] 2025-04-04 00:37:41.818631 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)  2025-04-04 00:37:41.818748 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)  2025-04-04 00:37:41.818782 | orchestrator | skipping: [testbed-node-0] 2025-04-04 00:37:41.818894 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)  2025-04-04 00:37:41.819773 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)  2025-04-04 00:37:41.967108 | orchestrator | skipping: [testbed-node-1] 2025-04-04 00:37:41.972625 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)  2025-04-04 00:37:41.972731 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)  2025-04-04 00:37:41.973821 | orchestrator | skipping: [testbed-node-2] 2025-04-04 00:37:41.974998 | orchestrator | 2025-04-04 00:37:41.976163 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] *** 2025-04-04 00:37:41.977310 | orchestrator | Friday 04 April 2025 00:37:41 +0000 (0:00:00.702) 0:06:15.136 ********** 2025-04-04 00:37:42.107223 | orchestrator | skipping: [testbed-manager] 2025-04-04 00:37:42.177110 | orchestrator | skipping: [testbed-node-3] 2025-04-04 00:37:42.241855 | orchestrator | skipping: [testbed-node-4] 2025-04-04 00:37:42.325248 | orchestrator | skipping: [testbed-node-5] 2025-04-04 00:37:42.393992 | orchestrator | skipping: [testbed-node-0] 2025-04-04 00:37:42.498678 | orchestrator | skipping: [testbed-node-1] 2025-04-04 00:37:42.504223 | orchestrator | skipping: [testbed-node-2] 2025-04-04 00:37:42.644924 | orchestrator | 2025-04-04 00:37:42.644985 | 
orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] *** 2025-04-04 00:37:42.645001 | orchestrator | Friday 04 April 2025 00:37:42 +0000 (0:00:00.532) 0:06:15.668 ********** 2025-04-04 00:37:42.645028 | orchestrator | skipping: [testbed-manager] 2025-04-04 00:37:42.713589 | orchestrator | skipping: [testbed-node-3] 2025-04-04 00:37:42.789959 | orchestrator | skipping: [testbed-node-4] 2025-04-04 00:37:42.882618 | orchestrator | skipping: [testbed-node-5] 2025-04-04 00:37:42.964259 | orchestrator | skipping: [testbed-node-0] 2025-04-04 00:37:43.097689 | orchestrator | skipping: [testbed-node-1] 2025-04-04 00:37:43.102594 | orchestrator | skipping: [testbed-node-2] 2025-04-04 00:37:43.105834 | orchestrator | 2025-04-04 00:37:43.106692 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] ******* 2025-04-04 00:37:43.107738 | orchestrator | Friday 04 April 2025 00:37:43 +0000 (0:00:00.599) 0:06:16.268 ********** 2025-04-04 00:37:43.266808 | orchestrator | skipping: [testbed-manager] 2025-04-04 00:37:43.355819 | orchestrator | skipping: [testbed-node-3] 2025-04-04 00:37:43.421817 | orchestrator | skipping: [testbed-node-4] 2025-04-04 00:37:43.486668 | orchestrator | skipping: [testbed-node-5] 2025-04-04 00:37:43.561675 | orchestrator | skipping: [testbed-node-0] 2025-04-04 00:37:43.688246 | orchestrator | skipping: [testbed-node-1] 2025-04-04 00:37:43.689552 | orchestrator | skipping: [testbed-node-2] 2025-04-04 00:37:43.690711 | orchestrator | 2025-04-04 00:37:43.691384 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] ***** 2025-04-04 00:37:43.692461 | orchestrator | Friday 04 April 2025 00:37:43 +0000 (0:00:00.590) 0:06:16.858 ********** 2025-04-04 00:37:49.411287 | orchestrator | ok: [testbed-manager] 2025-04-04 00:37:49.411975 | orchestrator | changed: [testbed-node-3] 2025-04-04 00:37:49.412412 | orchestrator | changed: [testbed-node-5] 
2025-04-04 00:37:49.413752 | orchestrator | changed: [testbed-node-4] 2025-04-04 00:37:49.414253 | orchestrator | changed: [testbed-node-0] 2025-04-04 00:37:49.414868 | orchestrator | changed: [testbed-node-1] 2025-04-04 00:37:49.415577 | orchestrator | changed: [testbed-node-2] 2025-04-04 00:37:49.416143 | orchestrator | 2025-04-04 00:37:49.416915 | orchestrator | TASK [osism.services.docker : Include config tasks] **************************** 2025-04-04 00:37:49.417278 | orchestrator | Friday 04 April 2025 00:37:49 +0000 (0:00:05.722) 0:06:22.581 ********** 2025-04-04 00:37:50.440799 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-04-04 00:37:50.441696 | orchestrator | 2025-04-04 00:37:50.442744 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************ 2025-04-04 00:37:50.443479 | orchestrator | Friday 04 April 2025 00:37:50 +0000 (0:00:01.032) 0:06:23.613 ********** 2025-04-04 00:37:51.026397 | orchestrator | changed: [testbed-node-3] 2025-04-04 00:37:51.467879 | orchestrator | ok: [testbed-manager] 2025-04-04 00:37:51.468363 | orchestrator | changed: [testbed-node-4] 2025-04-04 00:37:51.470833 | orchestrator | changed: [testbed-node-5] 2025-04-04 00:37:51.471365 | orchestrator | changed: [testbed-node-0] 2025-04-04 00:37:51.473217 | orchestrator | changed: [testbed-node-1] 2025-04-04 00:37:51.474391 | orchestrator | changed: [testbed-node-2] 2025-04-04 00:37:51.477362 | orchestrator | 2025-04-04 00:37:51.479137 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] **************** 2025-04-04 00:37:51.479477 | orchestrator | Friday 04 April 2025 00:37:51 +0000 (0:00:01.023) 0:06:24.637 ********** 2025-04-04 00:37:52.754525 | orchestrator | ok: [testbed-manager] 2025-04-04 00:37:52.754926 | orchestrator | 
changed: [testbed-node-3] 2025-04-04 00:37:52.756936 | orchestrator | changed: [testbed-node-4] 2025-04-04 00:37:52.758663 | orchestrator | changed: [testbed-node-0] 2025-04-04 00:37:52.759747 | orchestrator | changed: [testbed-node-5] 2025-04-04 00:37:52.760918 | orchestrator | changed: [testbed-node-1] 2025-04-04 00:37:52.761785 | orchestrator | changed: [testbed-node-2] 2025-04-04 00:37:52.763420 | orchestrator | 2025-04-04 00:37:52.764196 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] *********************** 2025-04-04 00:37:52.765469 | orchestrator | Friday 04 April 2025 00:37:52 +0000 (0:00:01.284) 0:06:25.921 ********** 2025-04-04 00:37:54.466146 | orchestrator | ok: [testbed-manager] 2025-04-04 00:37:54.466931 | orchestrator | changed: [testbed-node-3] 2025-04-04 00:37:54.468123 | orchestrator | changed: [testbed-node-4] 2025-04-04 00:37:54.469341 | orchestrator | changed: [testbed-node-5] 2025-04-04 00:37:54.470108 | orchestrator | changed: [testbed-node-1] 2025-04-04 00:37:54.470835 | orchestrator | changed: [testbed-node-0] 2025-04-04 00:37:54.471766 | orchestrator | changed: [testbed-node-2] 2025-04-04 00:37:54.472328 | orchestrator | 2025-04-04 00:37:54.472795 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] *** 2025-04-04 00:37:54.473505 | orchestrator | Friday 04 April 2025 00:37:54 +0000 (0:00:01.712) 0:06:27.634 ********** 2025-04-04 00:37:54.603392 | orchestrator | skipping: [testbed-manager] 2025-04-04 00:37:55.811554 | orchestrator | ok: [testbed-node-3] 2025-04-04 00:37:55.811729 | orchestrator | ok: [testbed-node-4] 2025-04-04 00:37:55.812523 | orchestrator | ok: [testbed-node-5] 2025-04-04 00:37:55.813892 | orchestrator | ok: [testbed-node-0] 2025-04-04 00:37:55.814232 | orchestrator | ok: [testbed-node-1] 2025-04-04 00:37:55.815489 | orchestrator | ok: [testbed-node-2] 2025-04-04 00:37:55.815915 | orchestrator | 2025-04-04 00:37:55.817234 | orchestrator | TASK 
[osism.services.docker : Copy limits configuration file] ****************** 2025-04-04 00:37:55.817572 | orchestrator | Friday 04 April 2025 00:37:55 +0000 (0:00:01.346) 0:06:28.981 ********** 2025-04-04 00:37:57.023869 | orchestrator | changed: [testbed-node-3] 2025-04-04 00:37:57.027394 | orchestrator | ok: [testbed-manager] 2025-04-04 00:37:57.027690 | orchestrator | changed: [testbed-node-4] 2025-04-04 00:37:57.028423 | orchestrator | changed: [testbed-node-5] 2025-04-04 00:37:57.028739 | orchestrator | changed: [testbed-node-0] 2025-04-04 00:37:57.029630 | orchestrator | changed: [testbed-node-1] 2025-04-04 00:37:57.030011 | orchestrator | changed: [testbed-node-2] 2025-04-04 00:37:57.030502 | orchestrator | 2025-04-04 00:37:57.031361 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] ************* 2025-04-04 00:37:57.032013 | orchestrator | Friday 04 April 2025 00:37:57 +0000 (0:00:01.212) 0:06:30.194 ********** 2025-04-04 00:37:58.408793 | orchestrator | changed: [testbed-node-3] 2025-04-04 00:37:58.409018 | orchestrator | changed: [testbed-node-4] 2025-04-04 00:37:58.410371 | orchestrator | changed: [testbed-manager] 2025-04-04 00:37:58.411805 | orchestrator | changed: [testbed-node-5] 2025-04-04 00:37:58.422588 | orchestrator | changed: [testbed-node-0] 2025-04-04 00:37:58.422827 | orchestrator | changed: [testbed-node-1] 2025-04-04 00:37:58.424230 | orchestrator | changed: [testbed-node-2] 2025-04-04 00:37:58.424727 | orchestrator | 2025-04-04 00:37:58.425468 | orchestrator | TASK [osism.services.docker : Include service tasks] *************************** 2025-04-04 00:37:58.426128 | orchestrator | Friday 04 April 2025 00:37:58 +0000 (0:00:01.382) 0:06:31.576 ********** 2025-04-04 00:37:59.609348 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, 
testbed-node-2 2025-04-04 00:37:59.612817 | orchestrator | 2025-04-04 00:37:59.613325 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] *************************** 2025-04-04 00:37:59.613355 | orchestrator | Friday 04 April 2025 00:37:59 +0000 (0:00:01.201) 0:06:32.778 ********** 2025-04-04 00:38:00.928534 | orchestrator | ok: [testbed-node-3] 2025-04-04 00:38:00.930125 | orchestrator | ok: [testbed-manager] 2025-04-04 00:38:00.930880 | orchestrator | ok: [testbed-node-4] 2025-04-04 00:38:00.932663 | orchestrator | ok: [testbed-node-5] 2025-04-04 00:38:00.932981 | orchestrator | ok: [testbed-node-0] 2025-04-04 00:38:00.935148 | orchestrator | ok: [testbed-node-1] 2025-04-04 00:38:00.935568 | orchestrator | ok: [testbed-node-2] 2025-04-04 00:38:00.939359 | orchestrator | 2025-04-04 00:38:00.939672 | orchestrator | TASK [osism.services.docker : Manage service] ********************************** 2025-04-04 00:38:00.942977 | orchestrator | Friday 04 April 2025 00:38:00 +0000 (0:00:01.316) 0:06:34.095 ********** 2025-04-04 00:38:02.064933 | orchestrator | ok: [testbed-node-3] 2025-04-04 00:38:02.066498 | orchestrator | ok: [testbed-manager] 2025-04-04 00:38:02.068309 | orchestrator | ok: [testbed-node-4] 2025-04-04 00:38:02.068883 | orchestrator | ok: [testbed-node-5] 2025-04-04 00:38:02.069697 | orchestrator | ok: [testbed-node-0] 2025-04-04 00:38:02.071527 | orchestrator | ok: [testbed-node-2] 2025-04-04 00:38:02.075669 | orchestrator | ok: [testbed-node-1] 2025-04-04 00:38:02.076676 | orchestrator | 2025-04-04 00:38:02.077842 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ******************** 2025-04-04 00:38:02.079371 | orchestrator | Friday 04 April 2025 00:38:02 +0000 (0:00:01.140) 0:06:35.235 ********** 2025-04-04 00:38:03.241994 | orchestrator | ok: [testbed-node-3] 2025-04-04 00:38:03.243342 | orchestrator | ok: [testbed-manager] 2025-04-04 00:38:03.243386 | orchestrator | ok: [testbed-node-4] 2025-04-04 
00:38:03.244650 | orchestrator | ok: [testbed-node-5] 2025-04-04 00:38:03.245943 | orchestrator | ok: [testbed-node-0] 2025-04-04 00:38:03.249385 | orchestrator | ok: [testbed-node-1] 2025-04-04 00:38:03.250153 | orchestrator | ok: [testbed-node-2] 2025-04-04 00:38:03.250200 | orchestrator | 2025-04-04 00:38:03.250821 | orchestrator | TASK [osism.services.docker : Manage containerd service] *********************** 2025-04-04 00:38:03.251400 | orchestrator | Friday 04 April 2025 00:38:03 +0000 (0:00:01.173) 0:06:36.408 ********** 2025-04-04 00:38:04.028056 | orchestrator | ok: [testbed-manager] 2025-04-04 00:38:04.836527 | orchestrator | ok: [testbed-node-3] 2025-04-04 00:38:04.836763 | orchestrator | ok: [testbed-node-4] 2025-04-04 00:38:04.838431 | orchestrator | ok: [testbed-node-5] 2025-04-04 00:38:04.839971 | orchestrator | ok: [testbed-node-0] 2025-04-04 00:38:04.840313 | orchestrator | ok: [testbed-node-1] 2025-04-04 00:38:04.841697 | orchestrator | ok: [testbed-node-2] 2025-04-04 00:38:04.843520 | orchestrator | 2025-04-04 00:38:04.843742 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] ************************* 2025-04-04 00:38:04.845452 | orchestrator | Friday 04 April 2025 00:38:04 +0000 (0:00:01.597) 0:06:38.006 ********** 2025-04-04 00:38:06.151414 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-04-04 00:38:06.152268 | orchestrator | 2025-04-04 00:38:06.152516 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-04-04 00:38:06.153714 | orchestrator | Friday 04 April 2025 00:38:05 +0000 (0:00:01.002) 0:06:39.009 ********** 2025-04-04 00:38:06.155758 | orchestrator | 2025-04-04 00:38:06.155812 | orchestrator | TASK [osism.services.docker : Flush handlers] 
********************************** 2025-04-04 00:38:06.155991 | orchestrator | Friday 04 April 2025 00:38:05 +0000 (0:00:00.039) 0:06:39.049 ********** 2025-04-04 00:38:06.156307 | orchestrator | 2025-04-04 00:38:06.158085 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-04-04 00:38:06.158521 | orchestrator | Friday 04 April 2025 00:38:05 +0000 (0:00:00.043) 0:06:39.092 ********** 2025-04-04 00:38:06.160444 | orchestrator | 2025-04-04 00:38:06.160746 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-04-04 00:38:06.161120 | orchestrator | Friday 04 April 2025 00:38:05 +0000 (0:00:00.055) 0:06:39.148 ********** 2025-04-04 00:38:06.162072 | orchestrator | 2025-04-04 00:38:06.162292 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-04-04 00:38:06.163223 | orchestrator | Friday 04 April 2025 00:38:06 +0000 (0:00:00.041) 0:06:39.190 ********** 2025-04-04 00:38:06.163784 | orchestrator | 2025-04-04 00:38:06.164287 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-04-04 00:38:06.165824 | orchestrator | Friday 04 April 2025 00:38:06 +0000 (0:00:00.040) 0:06:39.230 ********** 2025-04-04 00:38:06.166519 | orchestrator | 2025-04-04 00:38:06.166845 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-04-04 00:38:06.168054 | orchestrator | Friday 04 April 2025 00:38:06 +0000 (0:00:00.049) 0:06:39.279 ********** 2025-04-04 00:38:06.168293 | orchestrator | 2025-04-04 00:38:06.169326 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-04-04 00:38:06.169606 | orchestrator | Friday 04 April 2025 00:38:06 +0000 (0:00:00.040) 0:06:39.320 ********** 2025-04-04 00:38:07.321002 | orchestrator | ok: [testbed-node-0] 2025-04-04 00:38:07.321784 | orchestrator | ok: [testbed-node-1] 
2025-04-04 00:38:07.326789 | orchestrator | ok: [testbed-node-2] 2025-04-04 00:38:07.327725 | orchestrator | 2025-04-04 00:38:07.328692 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] ************* 2025-04-04 00:38:07.329953 | orchestrator | Friday 04 April 2025 00:38:07 +0000 (0:00:01.166) 0:06:40.487 ********** 2025-04-04 00:38:08.973516 | orchestrator | changed: [testbed-node-3] 2025-04-04 00:38:08.973749 | orchestrator | changed: [testbed-manager] 2025-04-04 00:38:08.975826 | orchestrator | changed: [testbed-node-5] 2025-04-04 00:38:08.977377 | orchestrator | changed: [testbed-node-1] 2025-04-04 00:38:08.977582 | orchestrator | changed: [testbed-node-4] 2025-04-04 00:38:08.978965 | orchestrator | changed: [testbed-node-0] 2025-04-04 00:38:08.979580 | orchestrator | changed: [testbed-node-2] 2025-04-04 00:38:08.980262 | orchestrator | 2025-04-04 00:38:08.981099 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] *************** 2025-04-04 00:38:08.981764 | orchestrator | Friday 04 April 2025 00:38:08 +0000 (0:00:01.653) 0:06:42.140 ********** 2025-04-04 00:38:10.147839 | orchestrator | changed: [testbed-node-3] 2025-04-04 00:38:10.150444 | orchestrator | changed: [testbed-manager] 2025-04-04 00:38:10.150663 | orchestrator | changed: [testbed-node-5] 2025-04-04 00:38:10.151057 | orchestrator | changed: [testbed-node-4] 2025-04-04 00:38:10.152019 | orchestrator | changed: [testbed-node-0] 2025-04-04 00:38:10.152753 | orchestrator | changed: [testbed-node-1] 2025-04-04 00:38:10.153291 | orchestrator | changed: [testbed-node-2] 2025-04-04 00:38:10.153936 | orchestrator | 2025-04-04 00:38:10.154143 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] *************** 2025-04-04 00:38:10.154516 | orchestrator | Friday 04 April 2025 00:38:10 +0000 (0:00:01.179) 0:06:43.319 ********** 2025-04-04 00:38:10.303947 | orchestrator | skipping: [testbed-manager] 2025-04-04 
00:38:11.982118 | orchestrator | changed: [testbed-node-3] 2025-04-04 00:38:11.982486 | orchestrator | changed: [testbed-node-4] 2025-04-04 00:38:11.982788 | orchestrator | changed: [testbed-node-5] 2025-04-04 00:38:11.982851 | orchestrator | changed: [testbed-node-1] 2025-04-04 00:38:11.983590 | orchestrator | changed: [testbed-node-0] 2025-04-04 00:38:11.983830 | orchestrator | changed: [testbed-node-2] 2025-04-04 00:38:11.985244 | orchestrator | 2025-04-04 00:38:11.985458 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] **** 2025-04-04 00:38:11.987559 | orchestrator | Friday 04 April 2025 00:38:11 +0000 (0:00:01.830) 0:06:45.150 ********** 2025-04-04 00:38:12.094514 | orchestrator | skipping: [testbed-node-3] 2025-04-04 00:38:12.095443 | orchestrator | 2025-04-04 00:38:12.097717 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************ 2025-04-04 00:38:12.098817 | orchestrator | Friday 04 April 2025 00:38:12 +0000 (0:00:00.112) 0:06:45.263 ********** 2025-04-04 00:38:13.126483 | orchestrator | changed: [testbed-node-3] 2025-04-04 00:38:13.126657 | orchestrator | changed: [testbed-node-4] 2025-04-04 00:38:13.126775 | orchestrator | ok: [testbed-manager] 2025-04-04 00:38:13.127258 | orchestrator | changed: [testbed-node-5] 2025-04-04 00:38:13.130750 | orchestrator | changed: [testbed-node-0] 2025-04-04 00:38:13.131522 | orchestrator | changed: [testbed-node-1] 2025-04-04 00:38:13.131992 | orchestrator | changed: [testbed-node-2] 2025-04-04 00:38:13.132493 | orchestrator | 2025-04-04 00:38:13.133062 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] *** 2025-04-04 00:38:13.133640 | orchestrator | Friday 04 April 2025 00:38:13 +0000 (0:00:01.031) 0:06:46.294 ********** 2025-04-04 00:38:13.288225 | orchestrator | skipping: [testbed-manager] 2025-04-04 00:38:13.359274 | orchestrator | skipping: [testbed-node-3] 2025-04-04 
00:38:13.433930 | orchestrator | skipping: [testbed-node-4]
2025-04-04 00:38:13.514835 | orchestrator | skipping: [testbed-node-5]
2025-04-04 00:38:13.803465 | orchestrator | skipping: [testbed-node-0]
2025-04-04 00:38:13.927331 | orchestrator | skipping: [testbed-node-1]
2025-04-04 00:38:13.927869 | orchestrator | skipping: [testbed-node-2]
2025-04-04 00:38:13.928550 | orchestrator |
2025-04-04 00:38:13.928910 | orchestrator | TASK [osism.services.docker : Include facts tasks] *****************************
2025-04-04 00:38:13.929554 | orchestrator | Friday 04 April 2025 00:38:13 +0000 (0:00:00.804) 0:06:47.099 **********
2025-04-04 00:38:14.908296 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-04-04 00:38:14.911487 | orchestrator |
2025-04-04 00:38:14.911599 | orchestrator | TASK [osism.services.docker : Create facts directory] **************************
2025-04-04 00:38:15.786411 | orchestrator | Friday 04 April 2025 00:38:14 +0000 (0:00:00.977) 0:06:48.077 **********
2025-04-04 00:38:15.786503 | orchestrator | ok: [testbed-node-3]
2025-04-04 00:38:15.787088 | orchestrator | ok: [testbed-manager]
2025-04-04 00:38:15.787320 | orchestrator | ok: [testbed-node-4]
2025-04-04 00:38:15.790644 | orchestrator | ok: [testbed-node-5]
2025-04-04 00:38:18.415312 | orchestrator | ok: [testbed-node-0]
2025-04-04 00:38:18.415437 | orchestrator | ok: [testbed-node-1]
2025-04-04 00:38:18.415454 | orchestrator | ok: [testbed-node-2]
2025-04-04 00:38:18.415468 | orchestrator |
2025-04-04 00:38:18.415483 | orchestrator | TASK [osism.services.docker : Copy docker fact files] **************************
2025-04-04 00:38:18.415497 | orchestrator | Friday 04 April 2025 00:38:15 +0000 (0:00:00.877) 0:06:48.954 **********
2025-04-04 00:38:18.415527 | orchestrator | changed: [testbed-node-3] => (item=docker_containers)
2025-04-04 00:38:18.417658 | orchestrator | ok: [testbed-manager] => (item=docker_containers)
2025-04-04 00:38:18.420015 | orchestrator | changed: [testbed-node-4] => (item=docker_containers)
2025-04-04 00:38:18.422579 | orchestrator | changed: [testbed-node-5] => (item=docker_containers)
2025-04-04 00:38:18.423266 | orchestrator | changed: [testbed-node-1] => (item=docker_containers)
2025-04-04 00:38:18.423294 | orchestrator | changed: [testbed-node-0] => (item=docker_containers)
2025-04-04 00:38:18.424089 | orchestrator | changed: [testbed-node-2] => (item=docker_containers)
2025-04-04 00:38:18.425109 | orchestrator | changed: [testbed-node-3] => (item=docker_images)
2025-04-04 00:38:18.425330 | orchestrator | ok: [testbed-manager] => (item=docker_images)
2025-04-04 00:38:18.425788 | orchestrator | changed: [testbed-node-4] => (item=docker_images)
2025-04-04 00:38:18.426547 | orchestrator | changed: [testbed-node-5] => (item=docker_images)
2025-04-04 00:38:18.426971 | orchestrator | changed: [testbed-node-1] => (item=docker_images)
2025-04-04 00:38:18.427942 | orchestrator | changed: [testbed-node-0] => (item=docker_images)
2025-04-04 00:38:18.428730 | orchestrator | changed: [testbed-node-2] => (item=docker_images)
2025-04-04 00:38:18.428755 | orchestrator |
2025-04-04 00:38:18.428969 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] *******
2025-04-04 00:38:18.429502 | orchestrator | Friday 04 April 2025 00:38:18 +0000 (0:00:02.629) 0:06:51.584 **********
2025-04-04 00:38:18.561795 | orchestrator | skipping: [testbed-manager]
2025-04-04 00:38:18.640330 | orchestrator | skipping: [testbed-node-3]
2025-04-04 00:38:18.706389 | orchestrator | skipping: [testbed-node-4]
2025-04-04 00:38:18.797949 | orchestrator | skipping: [testbed-node-5]
2025-04-04 00:38:18.867126 | orchestrator | skipping: [testbed-node-0]
2025-04-04 00:38:18.979511 | orchestrator | skipping: [testbed-node-1]
2025-04-04 00:38:18.980723 | orchestrator | skipping: [testbed-node-2]
2025-04-04 00:38:18.981905 | orchestrator |
2025-04-04 00:38:18.982788 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] ***
2025-04-04 00:38:18.986195 | orchestrator | Friday 04 April 2025 00:38:18 +0000 (0:00:00.565) 0:06:52.149 **********
2025-04-04 00:38:19.899270 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-04-04 00:38:19.899446 | orchestrator |
2025-04-04 00:38:19.900237 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] ***
2025-04-04 00:38:19.900665 | orchestrator | Friday 04 April 2025 00:38:19 +0000 (0:00:00.920) 0:06:53.070 **********
2025-04-04 00:38:20.371618 | orchestrator | ok: [testbed-manager]
2025-04-04 00:38:20.780042 | orchestrator | ok: [testbed-node-3]
2025-04-04 00:38:20.780195 | orchestrator | ok: [testbed-node-4]
2025-04-04 00:38:20.780365 | orchestrator | ok: [testbed-node-5]
2025-04-04 00:38:20.780741 | orchestrator | ok: [testbed-node-0]
2025-04-04 00:38:20.782157 | orchestrator | ok: [testbed-node-1]
2025-04-04 00:38:20.783040 | orchestrator | ok: [testbed-node-2]
2025-04-04 00:38:20.783083 | orchestrator |
2025-04-04 00:38:20.783119 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ******
2025-04-04 00:38:20.783473 | orchestrator | Friday 04 April 2025 00:38:20 +0000 (0:00:00.878) 0:06:53.948 **********
2025-04-04 00:38:21.261747 | orchestrator | ok: [testbed-manager]
2025-04-04 00:38:21.520880 | orchestrator | ok: [testbed-node-3]
2025-04-04 00:38:21.881319 | orchestrator | ok: [testbed-node-4]
2025-04-04 00:38:21.884543 | orchestrator | ok: [testbed-node-5]
2025-04-04 00:38:21.885430 | orchestrator | ok: [testbed-node-0]
2025-04-04 00:38:21.886646 | orchestrator | ok: [testbed-node-1]
2025-04-04 00:38:21.887170 | orchestrator | ok: [testbed-node-2]
2025-04-04 00:38:21.888361 | orchestrator |
2025-04-04 00:38:21.889094 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] *************
2025-04-04 00:38:21.891707 | orchestrator | Friday 04 April 2025 00:38:21 +0000 (0:00:01.102) 0:06:55.051 **********
2025-04-04 00:38:22.067666 | orchestrator | skipping: [testbed-manager]
2025-04-04 00:38:22.146164 | orchestrator | skipping: [testbed-node-3]
2025-04-04 00:38:22.214430 | orchestrator | skipping: [testbed-node-4]
2025-04-04 00:38:22.289275 | orchestrator | skipping: [testbed-node-5]
2025-04-04 00:38:22.369402 | orchestrator | skipping: [testbed-node-0]
2025-04-04 00:38:22.470692 | orchestrator | skipping: [testbed-node-1]
2025-04-04 00:38:22.477712 | orchestrator | skipping: [testbed-node-2]
2025-04-04 00:38:22.478451 | orchestrator |
2025-04-04 00:38:22.478482 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] *********
2025-04-04 00:38:22.479732 | orchestrator | Friday 04 April 2025 00:38:22 +0000 (0:00:00.587) 0:06:55.638 **********
2025-04-04 00:38:23.928747 | orchestrator | ok: [testbed-node-3]
2025-04-04 00:38:23.928929 | orchestrator | ok: [testbed-node-4]
2025-04-04 00:38:23.929518 | orchestrator | ok: [testbed-manager]
2025-04-04 00:38:23.930642 | orchestrator | ok: [testbed-node-0]
2025-04-04 00:38:23.931398 | orchestrator | ok: [testbed-node-5]
2025-04-04 00:38:23.932290 | orchestrator | ok: [testbed-node-1]
2025-04-04 00:38:23.932720 | orchestrator | ok: [testbed-node-2]
2025-04-04 00:38:23.933745 | orchestrator |
2025-04-04 00:38:23.934376 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] ***************
2025-04-04 00:38:23.935272 | orchestrator | Friday 04 April 2025 00:38:23 +0000 (0:00:01.457) 0:06:57.095 **********
2025-04-04 00:38:24.066670 | orchestrator | skipping: [testbed-manager]
2025-04-04 00:38:24.130511 | orchestrator | skipping: [testbed-node-3]
2025-04-04 00:38:24.206578 | orchestrator | skipping: [testbed-node-4]
2025-04-04 00:38:24.294637 | orchestrator | skipping: [testbed-node-5]
2025-04-04 00:38:24.366552 | orchestrator | skipping: [testbed-node-0]
2025-04-04 00:38:24.466966 | orchestrator | skipping: [testbed-node-1]
2025-04-04 00:38:24.469402 | orchestrator | skipping: [testbed-node-2]
2025-04-04 00:38:24.470474 | orchestrator |
2025-04-04 00:38:24.470525 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] ****
2025-04-04 00:38:24.471531 | orchestrator | Friday 04 April 2025 00:38:24 +0000 (0:00:00.540) 0:06:57.636 **********
2025-04-04 00:38:26.491167 | orchestrator | ok: [testbed-node-3]
2025-04-04 00:38:26.491754 | orchestrator | ok: [testbed-node-4]
2025-04-04 00:38:26.492538 | orchestrator | ok: [testbed-manager]
2025-04-04 00:38:26.493404 | orchestrator | ok: [testbed-node-5]
2025-04-04 00:38:26.496450 | orchestrator | ok: [testbed-node-0]
2025-04-04 00:38:26.497669 | orchestrator | ok: [testbed-node-1]
2025-04-04 00:38:26.497693 | orchestrator | ok: [testbed-node-2]
2025-04-04 00:38:26.497708 | orchestrator |
2025-04-04 00:38:26.497730 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] ***********
2025-04-04 00:38:26.498734 | orchestrator | Friday 04 April 2025 00:38:26 +0000 (0:00:02.022) 0:06:59.659 **********
2025-04-04 00:38:27.783465 | orchestrator | ok: [testbed-manager]
2025-04-04 00:38:27.789029 | orchestrator | changed: [testbed-node-3]
2025-04-04 00:38:27.789448 | orchestrator | changed: [testbed-node-4]
2025-04-04 00:38:27.789476 | orchestrator | changed: [testbed-node-5]
2025-04-04 00:38:27.789491 | orchestrator | changed: [testbed-node-0]
2025-04-04 00:38:27.789506 | orchestrator | changed: [testbed-node-1]
2025-04-04 00:38:27.789525 | orchestrator | changed: [testbed-node-2]
2025-04-04 00:38:27.789761 | orchestrator |
2025-04-04 00:38:27.790736 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] **********************
2025-04-04 00:38:27.790993 | orchestrator | Friday 04 April 2025 00:38:27 +0000 (0:00:01.291) 0:07:00.950 **********
2025-04-04 00:38:29.473081 | orchestrator | ok: [testbed-manager]
2025-04-04 00:38:29.473985 | orchestrator | changed: [testbed-node-3]
2025-04-04 00:38:29.474079 | orchestrator | changed: [testbed-node-4]
2025-04-04 00:38:29.476670 | orchestrator | changed: [testbed-node-5]
2025-04-04 00:38:29.478486 | orchestrator | changed: [testbed-node-0]
2025-04-04 00:38:29.478522 | orchestrator | changed: [testbed-node-1]
2025-04-04 00:38:29.478538 | orchestrator | changed: [testbed-node-2]
2025-04-04 00:38:29.478552 | orchestrator |
2025-04-04 00:38:29.478568 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] ****
2025-04-04 00:38:29.478590 | orchestrator | Friday 04 April 2025 00:38:29 +0000 (0:00:01.688) 0:07:02.638 **********
2025-04-04 00:38:31.111627 | orchestrator | changed: [testbed-node-3]
2025-04-04 00:38:31.115939 | orchestrator | ok: [testbed-manager]
2025-04-04 00:38:31.117056 | orchestrator | changed: [testbed-node-4]
2025-04-04 00:38:31.117099 | orchestrator | changed: [testbed-node-5]
2025-04-04 00:38:31.117114 | orchestrator | changed: [testbed-node-0]
2025-04-04 00:38:31.117129 | orchestrator | changed: [testbed-node-1]
2025-04-04 00:38:31.117143 | orchestrator | changed: [testbed-node-2]
2025-04-04 00:38:31.117165 | orchestrator |
2025-04-04 00:38:31.117450 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2025-04-04 00:38:31.117768 | orchestrator | Friday 04 April 2025 00:38:31 +0000 (0:00:01.640) 0:07:04.279 **********
2025-04-04 00:38:31.802646 | orchestrator | ok: [testbed-manager]
2025-04-04 00:38:32.207731 | orchestrator | ok: [testbed-node-3]
2025-04-04 00:38:32.208630 | orchestrator | ok: [testbed-node-4]
2025-04-04 00:38:32.209383 | orchestrator | ok: [testbed-node-5]
2025-04-04 00:38:32.209840 | orchestrator | ok: [testbed-node-0]
2025-04-04 00:38:32.210548 | orchestrator | ok: [testbed-node-1]
2025-04-04 00:38:32.211061 | orchestrator | ok: [testbed-node-2]
2025-04-04 00:38:32.211357 | orchestrator |
2025-04-04 00:38:32.211970 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2025-04-04 00:38:32.212789 | orchestrator | Friday 04 April 2025 00:38:32 +0000 (0:00:01.098) 0:07:05.377 **********
2025-04-04 00:38:32.354699 | orchestrator | skipping: [testbed-manager]
2025-04-04 00:38:32.437422 | orchestrator | skipping: [testbed-node-3]
2025-04-04 00:38:32.519760 | orchestrator | skipping: [testbed-node-4]
2025-04-04 00:38:32.581048 | orchestrator | skipping: [testbed-node-5]
2025-04-04 00:38:32.670856 | orchestrator | skipping: [testbed-node-0]
2025-04-04 00:38:33.148599 | orchestrator | skipping: [testbed-node-1]
2025-04-04 00:38:33.148839 | orchestrator | skipping: [testbed-node-2]
2025-04-04 00:38:33.149634 | orchestrator |
2025-04-04 00:38:33.150370 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] *****
2025-04-04 00:38:33.150849 | orchestrator | Friday 04 April 2025 00:38:33 +0000 (0:00:00.943) 0:07:06.321 **********
2025-04-04 00:38:33.294939 | orchestrator | skipping: [testbed-manager]
2025-04-04 00:38:33.367577 | orchestrator | skipping: [testbed-node-3]
2025-04-04 00:38:33.441011 | orchestrator | skipping: [testbed-node-4]
2025-04-04 00:38:33.526998 | orchestrator | skipping: [testbed-node-5]
2025-04-04 00:38:33.605649 | orchestrator | skipping: [testbed-node-0]
2025-04-04 00:38:33.705443 | orchestrator | skipping: [testbed-node-1]
2025-04-04 00:38:33.705626 | orchestrator | skipping: [testbed-node-2]
2025-04-04 00:38:33.707484 | orchestrator |
2025-04-04 00:38:33.708429 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ******
2025-04-04 00:38:33.709516 | orchestrator | Friday 04 April 2025 00:38:33 +0000 (0:00:00.554) 0:07:06.875 **********
2025-04-04 00:38:33.854104 | orchestrator | ok: [testbed-manager]
2025-04-04 00:38:33.920922 | orchestrator | ok: [testbed-node-3]
2025-04-04 00:38:33.997043 | orchestrator | ok: [testbed-node-4]
2025-04-04 00:38:34.062374 | orchestrator | ok: [testbed-node-5]
2025-04-04 00:38:34.127915 | orchestrator | ok: [testbed-node-0]
2025-04-04 00:38:34.259741 | orchestrator | ok: [testbed-node-1]
2025-04-04 00:38:34.260623 | orchestrator | ok: [testbed-node-2]
2025-04-04 00:38:34.261524 | orchestrator |
2025-04-04 00:38:34.262716 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] ***
2025-04-04 00:38:34.263930 | orchestrator | Friday 04 April 2025 00:38:34 +0000 (0:00:00.555) 0:07:07.430 **********
2025-04-04 00:38:34.409952 | orchestrator | ok: [testbed-manager]
2025-04-04 00:38:34.690312 | orchestrator | ok: [testbed-node-3]
2025-04-04 00:38:34.758064 | orchestrator | ok: [testbed-node-4]
2025-04-04 00:38:34.819935 | orchestrator | ok: [testbed-node-5]
2025-04-04 00:38:34.894913 | orchestrator | ok: [testbed-node-0]
2025-04-04 00:38:35.018127 | orchestrator | ok: [testbed-node-1]
2025-04-04 00:38:35.018929 | orchestrator | ok: [testbed-node-2]
2025-04-04 00:38:35.020633 | orchestrator |
2025-04-04 00:38:35.021361 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] ***
2025-04-04 00:38:35.022311 | orchestrator | Friday 04 April 2025 00:38:35 +0000 (0:00:00.757) 0:07:08.187 **********
2025-04-04 00:38:35.165125 | orchestrator | ok: [testbed-manager]
2025-04-04 00:38:35.233916 | orchestrator | ok: [testbed-node-3]
2025-04-04 00:38:35.294557 | orchestrator | ok: [testbed-node-4]
2025-04-04 00:38:35.385520 | orchestrator | ok: [testbed-node-5]
2025-04-04 00:38:35.451560 | orchestrator | ok: [testbed-node-0]
2025-04-04 00:38:35.556288 | orchestrator | ok: [testbed-node-1]
2025-04-04 00:38:35.557398 | orchestrator | ok: [testbed-node-2]
2025-04-04 00:38:35.558818 | orchestrator |
2025-04-04 00:38:35.560336 | orchestrator | TASK [osism.services.chrony : Populate service facts] **************************
2025-04-04 00:38:35.561874 | orchestrator | Friday 04 April 2025 00:38:35 +0000 (0:00:00.538) 0:07:08.726 **********
2025-04-04 00:38:40.123835 | orchestrator | ok: [testbed-node-3]
2025-04-04 00:38:40.124080 | orchestrator | ok: [testbed-node-4]
2025-04-04 00:38:40.125066 | orchestrator | ok: [testbed-node-5]
2025-04-04 00:38:40.126115 | orchestrator | ok: [testbed-node-0]
2025-04-04 00:38:40.127158 | orchestrator | ok: [testbed-node-1]
2025-04-04 00:38:40.131279 | orchestrator | ok: [testbed-node-2]
2025-04-04 00:38:40.131412 | orchestrator | ok: [testbed-manager]
2025-04-04 00:38:40.131499 | orchestrator |
2025-04-04 00:38:40.131800 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************
2025-04-04 00:38:40.132388 | orchestrator | Friday 04 April 2025 00:38:40 +0000 (0:00:04.569) 0:07:13.295 **********
2025-04-04 00:38:40.256205 | orchestrator | skipping: [testbed-manager]
2025-04-04 00:38:40.405485 | orchestrator | skipping: [testbed-node-3]
2025-04-04 00:38:40.477255 | orchestrator | skipping: [testbed-node-4]
2025-04-04 00:38:40.562001 | orchestrator | skipping: [testbed-node-5]
2025-04-04 00:38:40.689925 | orchestrator | skipping: [testbed-node-0]
2025-04-04 00:38:40.691473 | orchestrator | skipping: [testbed-node-1]
2025-04-04 00:38:40.692072 | orchestrator | skipping: [testbed-node-2]
2025-04-04 00:38:40.693212 | orchestrator |
2025-04-04 00:38:40.694414 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] *****
2025-04-04 00:38:40.694703 | orchestrator | Friday 04 April 2025 00:38:40 +0000 (0:00:00.564) 0:07:13.860 **********
2025-04-04 00:38:41.820204 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-04-04 00:38:41.820756 | orchestrator |
2025-04-04 00:38:41.822097 | orchestrator | TASK [osism.services.chrony : Install package] *********************************
2025-04-04 00:38:41.823409 | orchestrator | Friday 04 April 2025 00:38:41 +0000 (0:00:01.129) 0:07:14.990 **********
2025-04-04 00:38:43.674353 | orchestrator | ok: [testbed-node-3]
2025-04-04 00:38:43.674533 | orchestrator | ok: [testbed-node-4]
2025-04-04 00:38:43.675569 | orchestrator | ok: [testbed-node-5]
2025-04-04 00:38:43.676548 | orchestrator | ok: [testbed-manager]
2025-04-04 00:38:43.676828 | orchestrator | ok: [testbed-node-0]
2025-04-04 00:38:43.680054 | orchestrator | ok: [testbed-node-1]
2025-04-04 00:38:43.681205 | orchestrator | ok: [testbed-node-2]
2025-04-04 00:38:43.682125 | orchestrator |
2025-04-04 00:38:43.682313 | orchestrator | TASK [osism.services.chrony : Manage chrony service] ***************************
2025-04-04 00:38:43.682611 | orchestrator | Friday 04 April 2025 00:38:43 +0000 (0:00:01.853) 0:07:16.843 **********
2025-04-04 00:38:44.713196 | orchestrator | ok: [testbed-node-3]
2025-04-04 00:38:44.715278 | orchestrator | ok: [testbed-node-4]
2025-04-04 00:38:44.715324 | orchestrator | ok: [testbed-manager]
2025-04-04 00:38:44.715542 | orchestrator | ok: [testbed-node-5]
2025-04-04 00:38:44.719106 | orchestrator | ok: [testbed-node-0]
2025-04-04 00:38:44.719516 | orchestrator | ok: [testbed-node-1]
2025-04-04 00:38:44.720044 | orchestrator | ok: [testbed-node-2]
2025-04-04 00:38:44.720692 | orchestrator |
2025-04-04 00:38:44.721619 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] **************
2025-04-04 00:38:45.178137 | orchestrator | Friday 04 April 2025 00:38:44 +0000 (0:00:01.038) 0:07:17.881 **********
2025-04-04 00:38:45.178320 | orchestrator | ok: [testbed-manager]
2025-04-04 00:38:45.535178 | orchestrator | ok: [testbed-node-3]
2025-04-04 00:38:45.536780 | orchestrator | ok: [testbed-node-4]
2025-04-04 00:38:45.536851 | orchestrator | ok: [testbed-node-5]
2025-04-04 00:38:45.537483 | orchestrator | ok: [testbed-node-0]
2025-04-04 00:38:45.537511 | orchestrator | ok: [testbed-node-1]
2025-04-04 00:38:45.537534 | orchestrator | ok: [testbed-node-2]
2025-04-04 00:38:45.539113 | orchestrator |
2025-04-04 00:38:45.540043 | orchestrator | TASK [osism.services.chrony : Copy configuration file] *************************
2025-04-04 00:38:45.540613 | orchestrator | Friday 04 April 2025 00:38:45 +0000 (0:00:00.824) 0:07:18.706 **********
2025-04-04 00:38:47.400337 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-04-04 00:38:47.401051 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-04-04 00:38:47.402391 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-04-04 00:38:47.405756 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-04-04 00:38:47.406277 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-04-04 00:38:47.407274 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-04-04 00:38:47.408165 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-04-04 00:38:47.409132 | orchestrator |
2025-04-04 00:38:47.410347 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ******
2025-04-04 00:38:47.411418 | orchestrator | Friday 04 April 2025 00:38:47 +0000 (0:00:01.862) 0:07:20.568 **********
2025-04-04 00:38:48.287366 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-04-04 00:38:48.287522 | orchestrator |
2025-04-04 00:38:48.288178 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] ****************************
2025-04-04 00:38:48.288638 | orchestrator | Friday 04 April 2025 00:38:48 +0000 (0:00:00.890) 0:07:21.459 **********
2025-04-04 00:38:57.686653 | orchestrator | changed: [testbed-node-4]
2025-04-04 00:38:57.686909 | orchestrator | changed: [testbed-node-3]
2025-04-04 00:38:57.687425 | orchestrator | changed: [testbed-node-0]
2025-04-04 00:38:57.687474 | orchestrator | changed: [testbed-node-5]
2025-04-04 00:38:57.692751 | orchestrator | changed: [testbed-node-1]
2025-04-04 00:38:57.693258 | orchestrator | changed: [testbed-node-2]
2025-04-04 00:38:57.694072 | orchestrator | changed: [testbed-manager]
2025-04-04 00:38:57.694190 | orchestrator |
2025-04-04 00:38:57.694648 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] *****************************
2025-04-04 00:38:57.695106 | orchestrator | Friday 04 April 2025 00:38:57 +0000 (0:00:09.395) 0:07:30.854 **********
2025-04-04 00:38:59.632576 | orchestrator | ok: [testbed-manager]
2025-04-04 00:38:59.632740 | orchestrator | ok: [testbed-node-4]
2025-04-04 00:38:59.634583 | orchestrator | ok: [testbed-node-3]
2025-04-04 00:38:59.636555 | orchestrator | ok: [testbed-node-0]
2025-04-04 00:38:59.637441 | orchestrator | ok: [testbed-node-5]
2025-04-04 00:38:59.637474 | orchestrator | ok: [testbed-node-1]
2025-04-04 00:38:59.648614 | orchestrator | ok: [testbed-node-2]
2025-04-04 00:39:00.893034 | orchestrator |
2025-04-04 00:39:00.893157 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] *********
2025-04-04 00:39:00.893177 | orchestrator | Friday 04 April 2025 00:38:59 +0000 (0:00:01.948) 0:07:32.802 **********
2025-04-04 00:39:00.893209 | orchestrator | ok: [testbed-node-3]
2025-04-04 00:39:00.893363 | orchestrator | ok: [testbed-node-4]
2025-04-04 00:39:00.893983 | orchestrator | ok: [testbed-node-5]
2025-04-04 00:39:00.897599 | orchestrator | ok: [testbed-node-0]
2025-04-04 00:39:00.898727 | orchestrator | ok: [testbed-node-2]
2025-04-04 00:39:00.899870 | orchestrator | ok: [testbed-node-1]
2025-04-04 00:39:00.900878 | orchestrator |
2025-04-04 00:39:00.902113 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] ***************
2025-04-04 00:39:00.902342 | orchestrator | Friday 04 April 2025 00:39:00 +0000 (0:00:01.260) 0:07:34.063 **********
2025-04-04 00:39:02.208289 | orchestrator | changed: [testbed-node-4]
2025-04-04 00:39:02.209397 | orchestrator | changed: [testbed-node-3]
2025-04-04 00:39:02.210469 | orchestrator | changed: [testbed-manager]
2025-04-04 00:39:02.211580 | orchestrator | changed: [testbed-node-0]
2025-04-04 00:39:02.213546 | orchestrator | changed: [testbed-node-5]
2025-04-04 00:39:02.214338 | orchestrator | changed: [testbed-node-1]
2025-04-04 00:39:02.215303 | orchestrator | changed: [testbed-node-2]
2025-04-04 00:39:02.217973 | orchestrator |
2025-04-04 00:39:02.218962 | orchestrator | PLAY [Apply bootstrap role part 2] *********************************************
2025-04-04 00:39:02.220137 | orchestrator |
2025-04-04 00:39:02.221615 | orchestrator | TASK [Include hardening role] **************************************************
2025-04-04 00:39:02.222650 | orchestrator | Friday 04 April 2025 00:39:02 +0000 (0:00:01.314) 0:07:35.377 **********
2025-04-04 00:39:02.355966 | orchestrator | skipping: [testbed-manager]
2025-04-04 00:39:02.421028 | orchestrator | skipping: [testbed-node-3]
2025-04-04 00:39:02.486540 | orchestrator | skipping: [testbed-node-4]
2025-04-04 00:39:02.569568 | orchestrator | skipping: [testbed-node-5]
2025-04-04 00:39:02.632578 | orchestrator | skipping: [testbed-node-0]
2025-04-04 00:39:02.787733 | orchestrator | skipping: [testbed-node-1]
2025-04-04 00:39:02.788612 | orchestrator | skipping: [testbed-node-2]
2025-04-04 00:39:02.788658 | orchestrator |
2025-04-04 00:39:02.789466 | orchestrator | PLAY [Apply bootstrap roles part 3] ********************************************
2025-04-04 00:39:02.790607 | orchestrator |
2025-04-04 00:39:02.791217 | orchestrator | TASK [osism.services.journald : Copy configuration file] ***********************
2025-04-04 00:39:02.792099 | orchestrator | Friday 04 April 2025 00:39:02 +0000 (0:00:00.581) 0:07:35.959 **********
2025-04-04 00:39:04.120424 | orchestrator | changed: [testbed-node-3]
2025-04-04 00:39:04.120725 | orchestrator | changed: [testbed-node-4]
2025-04-04 00:39:04.121682 | orchestrator | changed: [testbed-manager]
2025-04-04 00:39:04.121863 | orchestrator | changed: [testbed-node-5]
2025-04-04 00:39:04.122925 | orchestrator | changed: [testbed-node-0]
2025-04-04 00:39:04.123316 | orchestrator | changed: [testbed-node-1]
2025-04-04 00:39:04.124167 | orchestrator | changed: [testbed-node-2]
2025-04-04 00:39:04.124348 | orchestrator |
2025-04-04 00:39:04.125592 | orchestrator | TASK [osism.services.journald : Manage journald service] ***********************
2025-04-04 00:39:04.125990 | orchestrator | Friday 04 April 2025 00:39:04 +0000 (0:00:01.332) 0:07:37.291 **********
2025-04-04 00:39:05.534807 | orchestrator | ok: [testbed-node-3]
2025-04-04 00:39:05.535384 | orchestrator | ok: [testbed-node-4]
2025-04-04 00:39:05.536609 | orchestrator | ok: [testbed-manager]
2025-04-04 00:39:05.537777 | orchestrator | ok: [testbed-node-5]
2025-04-04 00:39:05.538713 | orchestrator | ok: [testbed-node-0]
2025-04-04 00:39:05.539640 | orchestrator | ok: [testbed-node-1]
2025-04-04 00:39:05.540417 | orchestrator | ok: [testbed-node-2]
2025-04-04 00:39:05.541549 | orchestrator |
2025-04-04 00:39:05.542810 | orchestrator | TASK [Include auditd role] *****************************************************
2025-04-04 00:39:05.543099 | orchestrator | Friday 04 April 2025 00:39:05 +0000 (0:00:01.410) 0:07:38.702 **********
2025-04-04 00:39:05.672995 | orchestrator | skipping: [testbed-manager]
2025-04-04 00:39:05.749547 | orchestrator | skipping: [testbed-node-3]
2025-04-04 00:39:06.065835 | orchestrator | skipping: [testbed-node-4]
2025-04-04 00:39:06.138172 | orchestrator | skipping: [testbed-node-5]
2025-04-04 00:39:06.217934 | orchestrator | skipping: [testbed-node-0]
2025-04-04 00:39:06.650486 | orchestrator | skipping: [testbed-node-1]
2025-04-04 00:39:06.651486 | orchestrator | skipping: [testbed-node-2]
2025-04-04 00:39:06.652743 | orchestrator |
2025-04-04 00:39:06.653222 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] ***********
2025-04-04 00:39:06.654814 | orchestrator | Friday 04 April 2025 00:39:06 +0000 (0:00:01.119) 0:07:39.821 **********
2025-04-04 00:39:07.793313 | orchestrator | changed: [testbed-node-3]
2025-04-04 00:39:07.794189 | orchestrator | changed: [testbed-manager]
2025-04-04 00:39:07.795159 | orchestrator | changed: [testbed-node-4]
2025-04-04 00:39:07.797784 | orchestrator | changed: [testbed-node-5]
2025-04-04 00:39:07.797900 | orchestrator | changed: [testbed-node-0]
2025-04-04 00:39:07.799252 | orchestrator | changed: [testbed-node-2]
2025-04-04 00:39:07.800597 | orchestrator | changed: [testbed-node-1]
2025-04-04 00:39:07.801558 | orchestrator |
2025-04-04 00:39:07.802316 | orchestrator | PLAY [Set state bootstrap] *****************************************************
2025-04-04 00:39:07.803058 | orchestrator |
2025-04-04 00:39:07.803943 | orchestrator | TASK [Set osism.bootstrap.status fact] *****************************************
2025-04-04 00:39:07.804780 | orchestrator | Friday 04 April 2025 00:39:07 +0000 (0:00:01.142) 0:07:40.964 **********
2025-04-04 00:39:08.715742 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-04-04 00:39:08.716652 | orchestrator |
2025-04-04 00:39:08.716689 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2025-04-04 00:39:08.717804 | orchestrator | Friday 04 April 2025 00:39:08 +0000 (0:00:00.922) 0:07:41.887 **********
2025-04-04 00:39:09.142174 | orchestrator | ok: [testbed-manager]
2025-04-04 00:39:09.218925 | orchestrator | ok: [testbed-node-3]
2025-04-04 00:39:09.766555 | orchestrator | ok: [testbed-node-4]
2025-04-04 00:39:09.767569 | orchestrator | ok: [testbed-node-5]
2025-04-04 00:39:09.769344 | orchestrator | ok: [testbed-node-0]
2025-04-04 00:39:09.769505 | orchestrator | ok: [testbed-node-1]
2025-04-04 00:39:09.770087 | orchestrator | ok: [testbed-node-2]
2025-04-04 00:39:09.770749 | orchestrator |
2025-04-04 00:39:09.771423 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2025-04-04 00:39:10.996001 | orchestrator | Friday 04 April 2025 00:39:09 +0000 (0:00:01.048) 0:07:42.936 **********
2025-04-04 00:39:10.996118 | orchestrator | changed: [testbed-node-4]
2025-04-04 00:39:10.996216 | orchestrator | changed: [testbed-manager]
2025-04-04 00:39:10.998080 | orchestrator | changed: [testbed-node-3]
2025-04-04 00:39:10.998852 | orchestrator | changed: [testbed-node-5]
2025-04-04 00:39:10.999137 | orchestrator | changed: [testbed-node-0]
2025-04-04 00:39:10.999703 | orchestrator | changed: [testbed-node-1]
2025-04-04 00:39:11.000638 | orchestrator | changed: [testbed-node-2]
2025-04-04 00:39:11.001456 | orchestrator |
2025-04-04 00:39:11.002375 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] **************************************
2025-04-04 00:39:11.002685 | orchestrator | Friday 04 April 2025 00:39:10 +0000 (0:00:01.232) 0:07:44.168 **********
2025-04-04 00:39:12.052126 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-04-04 00:39:12.052770 | orchestrator |
2025-04-04 00:39:12.052814 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2025-04-04 00:39:12.053343 | orchestrator | Friday 04 April 2025 00:39:12 +0000 (0:00:01.054) 0:07:45.223 **********
2025-04-04 00:39:12.485847 | orchestrator | ok: [testbed-manager]
2025-04-04 00:39:12.938131 | orchestrator | ok: [testbed-node-3]
2025-04-04 00:39:12.939638 | orchestrator | ok: [testbed-node-4]
2025-04-04 00:39:12.940748 | orchestrator | ok: [testbed-node-5]
2025-04-04 00:39:12.940824 | orchestrator | ok: [testbed-node-0]
2025-04-04 00:39:12.942613 | orchestrator | ok: [testbed-node-1]
2025-04-04 00:39:12.943464 | orchestrator | ok: [testbed-node-2]
2025-04-04 00:39:12.943939 | orchestrator |
2025-04-04 00:39:12.944492 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2025-04-04 00:39:12.945088 | orchestrator | Friday 04 April 2025 00:39:12 +0000 (0:00:00.885) 0:07:46.109 **********
2025-04-04 00:39:14.018146 | orchestrator | changed: [testbed-manager]
2025-04-04 00:39:14.018366 | orchestrator | changed: [testbed-node-3]
2025-04-04 00:39:14.018888 | orchestrator | changed: [testbed-node-4]
2025-04-04 00:39:14.019554 | orchestrator | changed: [testbed-node-5]
2025-04-04 00:39:14.020560 | orchestrator | changed: [testbed-node-0]
2025-04-04 00:39:14.021402 | orchestrator | changed: [testbed-node-1]
2025-04-04 00:39:14.022466 | orchestrator | changed: [testbed-node-2]
2025-04-04 00:39:14.024106 | orchestrator |
2025-04-04 00:39:14.025778 | orchestrator | PLAY RECAP *********************************************************************
2025-04-04 00:39:14.026337 | orchestrator | 2025-04-04 00:39:14 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-04-04 00:39:14.026411 | orchestrator | 2025-04-04 00:39:14 | INFO  | Please wait and do not abort execution.
2025-04-04 00:39:14.026434 | orchestrator | testbed-manager : ok=160  changed=38  unreachable=0 failed=0 skipped=41  rescued=0 ignored=0
2025-04-04 00:39:14.026728 | orchestrator | testbed-node-0 : ok=168  changed=65  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-04-04 00:39:14.027085 | orchestrator | testbed-node-1 : ok=168  changed=65  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-04-04 00:39:14.027614 | orchestrator | testbed-node-2 : ok=168  changed=65  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-04-04 00:39:14.027843 | orchestrator | testbed-node-3 : ok=167  changed=62  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2025-04-04 00:39:14.028191 | orchestrator | testbed-node-4 : ok=167  changed=62  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-04-04 00:39:14.028616 | orchestrator | testbed-node-5 : ok=167  changed=62  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-04-04 00:39:14.029003 | orchestrator |
2025-04-04 00:39:14.029443 | orchestrator | Friday 04 April 2025 00:39:14 +0000 (0:00:01.080) 0:07:47.189 **********
2025-04-04 00:39:14.029766 | orchestrator | ===============================================================================
2025-04-04 00:39:14.030172 | orchestrator | osism.commons.packages : Install required packages --------------------- 61.10s
2025-04-04 00:39:14.030494 | orchestrator | osism.commons.packages : Download required packages -------------------- 37.15s
2025-04-04 00:39:14.030814 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 35.06s
2025-04-04 00:39:14.031160 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 14.79s
2025-04-04 00:39:14.031426 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 14.08s
2025-04-04 00:39:14.031885 | orchestrator | osism.commons.repository : Update package cache ------------------------ 13.91s
2025-04-04 00:39:14.032208 | orchestrator | osism.services.docker : Install docker-cli package --------------------- 11.29s
2025-04-04 00:39:14.032453 | orchestrator | osism.services.docker : Install docker package ------------------------- 10.40s
2025-04-04 00:39:14.032894 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 9.74s
2025-04-04 00:39:14.033148 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 9.40s
2025-04-04 00:39:14.033465 | orchestrator | osism.services.docker : Install containerd package ---------------------- 8.11s
2025-04-04 00:39:14.033801 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 7.88s
2025-04-04 00:39:14.034131 | orchestrator | osism.commons.sysctl : Set sysctl parameters on rabbitmq ---------------- 7.70s
2025-04-04 00:39:14.034418 | orchestrator | osism.services.rng : Install rng package -------------------------------- 7.31s
2025-04-04 00:39:14.034448 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 7.02s
2025-04-04 00:39:14.034835 | orchestrator | osism.services.docker : Add repository ---------------------------------- 6.38s
2025-04-04 00:39:14.035258 | orchestrator | osism.commons.cleanup : Remove dependencies that are no longer required --- 6.05s
2025-04-04 00:39:14.035688 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 6.02s
2025-04-04 00:39:14.036225 | orchestrator | osism.services.docker : Ensure that some packages are not installed ----- 5.72s
2025-04-04 00:39:14.839753 | orchestrator | osism.commons.services : Populate service facts ------------------------- 5.43s
2025-04-04 00:39:14.839882 | orchestrator | + [[ -e /etc/redhat-release ]]
2025-04-04 00:39:17.217901 | orchestrator | + osism apply network
2025-04-04 00:39:17.218095 | orchestrator | 2025-04-04 00:39:17 | INFO  | Task fe1629ff-5cfc-4c13-a919-1272a4161c88 (network) was prepared for execution.
2025-04-04 00:39:20.956425 | orchestrator | 2025-04-04 00:39:17 | INFO  | It takes a moment until task fe1629ff-5cfc-4c13-a919-1272a4161c88 (network) has been started and output is visible here.
2025-04-04 00:39:20.956527 | orchestrator |
2025-04-04 00:39:20.957290 | orchestrator | PLAY [Apply role network] ******************************************************
2025-04-04 00:39:20.957305 | orchestrator |
2025-04-04 00:39:20.957907 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ******
2025-04-04 00:39:20.958617 | orchestrator | Friday 04 April 2025 00:39:20 +0000 (0:00:00.267) 0:00:00.267 **********
2025-04-04 00:39:21.138780 | orchestrator | ok: [testbed-manager]
2025-04-04 00:39:21.230351 | orchestrator | ok: [testbed-node-0]
2025-04-04 00:39:21.309159 | orchestrator | ok: [testbed-node-1]
2025-04-04 00:39:21.393021 | orchestrator | ok: [testbed-node-2]
2025-04-04 00:39:21.475307 | orchestrator | ok: [testbed-node-3]
2025-04-04 00:39:21.748936 | orchestrator | ok: [testbed-node-4]
2025-04-04 00:39:21.750005 | orchestrator | ok: [testbed-node-5]
2025-04-04 00:39:21.750094 | orchestrator |
2025-04-04 00:39:21.751126 | orchestrator | TASK [osism.commons.network : Include type specific tasks] *********************
2025-04-04 00:39:21.751768 | orchestrator | Friday 04 April 2025 00:39:21 +0000 (0:00:00.792) 0:00:01.059 **********
2025-04-04 00:39:23.199928 | orchestrator |
included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-04-04 00:39:23.201009 | orchestrator | 2025-04-04 00:39:23.201699 | orchestrator | TASK [osism.commons.network : Install required packages] *********************** 2025-04-04 00:39:23.203000 | orchestrator | Friday 04 April 2025 00:39:23 +0000 (0:00:01.449) 0:00:02.508 ********** 2025-04-04 00:39:25.205849 | orchestrator | ok: [testbed-node-0] 2025-04-04 00:39:25.207899 | orchestrator | ok: [testbed-node-1] 2025-04-04 00:39:25.207942 | orchestrator | ok: [testbed-node-3] 2025-04-04 00:39:25.210092 | orchestrator | ok: [testbed-node-2] 2025-04-04 00:39:25.210982 | orchestrator | ok: [testbed-manager] 2025-04-04 00:39:25.212299 | orchestrator | ok: [testbed-node-4] 2025-04-04 00:39:25.213397 | orchestrator | ok: [testbed-node-5] 2025-04-04 00:39:25.213816 | orchestrator | 2025-04-04 00:39:25.214521 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] ************************* 2025-04-04 00:39:25.215060 | orchestrator | Friday 04 April 2025 00:39:25 +0000 (0:00:02.004) 0:00:04.512 ********** 2025-04-04 00:39:27.074566 | orchestrator | ok: [testbed-node-0] 2025-04-04 00:39:27.077113 | orchestrator | ok: [testbed-manager] 2025-04-04 00:39:27.077188 | orchestrator | ok: [testbed-node-1] 2025-04-04 00:39:27.078921 | orchestrator | ok: [testbed-node-3] 2025-04-04 00:39:27.080580 | orchestrator | ok: [testbed-node-2] 2025-04-04 00:39:27.082365 | orchestrator | ok: [testbed-node-4] 2025-04-04 00:39:27.083553 | orchestrator | ok: [testbed-node-5] 2025-04-04 00:39:27.084570 | orchestrator | 2025-04-04 00:39:27.085558 | orchestrator | TASK [osism.commons.network : Create required directories] ********************* 2025-04-04 00:39:27.086432 | orchestrator | Friday 04 April 2025 00:39:27 +0000 (0:00:01.863) 0:00:06.376 
********** 2025-04-04 00:39:27.731643 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan) 2025-04-04 00:39:27.731811 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan) 2025-04-04 00:39:27.732305 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan) 2025-04-04 00:39:27.732401 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan) 2025-04-04 00:39:28.402816 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan) 2025-04-04 00:39:28.402984 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan) 2025-04-04 00:39:28.403005 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan) 2025-04-04 00:39:28.403685 | orchestrator | 2025-04-04 00:39:28.404576 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] ********** 2025-04-04 00:39:28.405404 | orchestrator | Friday 04 April 2025 00:39:28 +0000 (0:00:01.335) 0:00:07.712 ********** 2025-04-04 00:39:30.366799 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-04-04 00:39:30.367466 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-04-04 00:39:30.368581 | orchestrator | ok: [testbed-manager -> localhost] 2025-04-04 00:39:30.369052 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-04-04 00:39:30.369784 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-04-04 00:39:30.370739 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-04-04 00:39:30.371123 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-04-04 00:39:30.371787 | orchestrator | 2025-04-04 00:39:30.373367 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] ********************** 2025-04-04 00:39:30.374713 | orchestrator | Friday 04 April 2025 00:39:30 +0000 (0:00:01.966) 0:00:09.678 ********** 2025-04-04 00:39:32.067221 | orchestrator | changed: [testbed-manager] 2025-04-04 00:39:32.068391 | orchestrator | changed: [testbed-node-0] 2025-04-04 00:39:32.069041 | orchestrator | changed: [testbed-node-1] 2025-04-04 00:39:32.069076 | orchestrator 
| changed: [testbed-node-2] 2025-04-04 00:39:32.070103 | orchestrator | changed: [testbed-node-3] 2025-04-04 00:39:32.070768 | orchestrator | changed: [testbed-node-4] 2025-04-04 00:39:32.071518 | orchestrator | changed: [testbed-node-5] 2025-04-04 00:39:32.071882 | orchestrator | 2025-04-04 00:39:32.072541 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] *********** 2025-04-04 00:39:32.073457 | orchestrator | Friday 04 April 2025 00:39:32 +0000 (0:00:01.696) 0:00:11.374 ********** 2025-04-04 00:39:32.575866 | orchestrator | ok: [testbed-manager -> localhost] 2025-04-04 00:39:32.660312 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-04-04 00:39:33.151019 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-04-04 00:39:33.152391 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-04-04 00:39:33.153682 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-04-04 00:39:33.154586 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-04-04 00:39:33.155540 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-04-04 00:39:33.156911 | orchestrator | 2025-04-04 00:39:33.158099 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] ********* 2025-04-04 00:39:33.159108 | orchestrator | Friday 04 April 2025 00:39:33 +0000 (0:00:01.089) 0:00:12.464 ********** 2025-04-04 00:39:33.818624 | orchestrator | ok: [testbed-node-0] 2025-04-04 00:39:33.954661 | orchestrator | ok: [testbed-manager] 2025-04-04 00:39:34.140536 | orchestrator | ok: [testbed-node-1] 2025-04-04 00:39:34.567293 | orchestrator | ok: [testbed-node-2] 2025-04-04 00:39:34.567711 | orchestrator | ok: [testbed-node-3] 2025-04-04 00:39:34.567996 | orchestrator | ok: [testbed-node-4] 2025-04-04 00:39:34.568732 | orchestrator | ok: [testbed-node-5] 2025-04-04 00:39:34.569071 | orchestrator | 2025-04-04 00:39:34.569705 | orchestrator | TASK [osism.commons.network : Copy interfaces file] **************************** 2025-04-04 
00:39:34.571182 | orchestrator | Friday 04 April 2025 00:39:34 +0000 (0:00:01.411) 0:00:13.875 ********** 2025-04-04 00:39:34.732784 | orchestrator | skipping: [testbed-manager] 2025-04-04 00:39:34.818304 | orchestrator | skipping: [testbed-node-0] 2025-04-04 00:39:34.906082 | orchestrator | skipping: [testbed-node-1] 2025-04-04 00:39:34.990241 | orchestrator | skipping: [testbed-node-2] 2025-04-04 00:39:35.089734 | orchestrator | skipping: [testbed-node-3] 2025-04-04 00:39:35.415319 | orchestrator | skipping: [testbed-node-4] 2025-04-04 00:39:35.416462 | orchestrator | skipping: [testbed-node-5] 2025-04-04 00:39:35.416508 | orchestrator | 2025-04-04 00:39:35.417057 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] ************* 2025-04-04 00:39:35.417829 | orchestrator | Friday 04 April 2025 00:39:35 +0000 (0:00:00.847) 0:00:14.722 ********** 2025-04-04 00:39:37.349227 | orchestrator | ok: [testbed-node-0] 2025-04-04 00:39:37.352178 | orchestrator | ok: [testbed-manager] 2025-04-04 00:39:37.353102 | orchestrator | ok: [testbed-node-1] 2025-04-04 00:39:37.356124 | orchestrator | ok: [testbed-node-2] 2025-04-04 00:39:37.357138 | orchestrator | ok: [testbed-node-3] 2025-04-04 00:39:37.357551 | orchestrator | ok: [testbed-node-5] 2025-04-04 00:39:37.358211 | orchestrator | ok: [testbed-node-4] 2025-04-04 00:39:37.358763 | orchestrator | 2025-04-04 00:39:37.360926 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] ************************* 2025-04-04 00:39:37.361584 | orchestrator | Friday 04 April 2025 00:39:37 +0000 (0:00:01.935) 0:00:16.657 ********** 2025-04-04 00:39:39.256554 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'}) 2025-04-04 00:39:39.257827 | orchestrator | changed: [testbed-node-1] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-04-04 00:39:39.257882 | orchestrator | 
changed: [testbed-node-0] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-04-04 00:39:39.259019 | orchestrator | changed: [testbed-node-2] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-04-04 00:39:39.260023 | orchestrator | changed: [testbed-node-3] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-04-04 00:39:39.261318 | orchestrator | changed: [testbed-node-4] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-04-04 00:39:39.261630 | orchestrator | changed: [testbed-node-5] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-04-04 00:39:39.262443 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-04-04 00:39:39.263142 | orchestrator | 2025-04-04 00:39:39.264008 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] ************** 2025-04-04 00:39:39.264787 | orchestrator | Friday 04 April 2025 00:39:39 +0000 (0:00:01.905) 0:00:18.563 ********** 2025-04-04 00:39:40.696367 | orchestrator | ok: [testbed-manager] 2025-04-04 00:39:40.697204 | orchestrator | changed: [testbed-node-3] 2025-04-04 00:39:40.700674 | orchestrator | changed: [testbed-node-1] 2025-04-04 00:39:40.703341 | orchestrator | changed: [testbed-node-4] 2025-04-04 00:39:40.703373 | orchestrator | changed: [testbed-node-0] 2025-04-04 00:39:40.703389 | orchestrator | changed: [testbed-node-2] 2025-04-04 00:39:40.703404 | orchestrator | changed: [testbed-node-5] 2025-04-04 00:39:40.703424 | orchestrator | 2025-04-04 00:39:40.704436 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] *************************** 2025-04-04 00:39:40.705556 | orchestrator | Friday 04 April 2025 00:39:40 +0000 (0:00:01.443) 0:00:20.006 ********** 2025-04-04 00:39:42.299461 | 
orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-04-04 00:39:42.301758 | orchestrator | 2025-04-04 00:39:43.834136 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2025-04-04 00:39:43.834312 | orchestrator | Friday 04 April 2025 00:39:42 +0000 (0:00:01.599) 0:00:21.605 ********** 2025-04-04 00:39:43.834355 | orchestrator | ok: [testbed-manager] 2025-04-04 00:39:43.834585 | orchestrator | ok: [testbed-node-1] 2025-04-04 00:39:43.836095 | orchestrator | ok: [testbed-node-2] 2025-04-04 00:39:43.837341 | orchestrator | ok: [testbed-node-3] 2025-04-04 00:39:43.838293 | orchestrator | ok: [testbed-node-4] 2025-04-04 00:39:43.838969 | orchestrator | ok: [testbed-node-5] 2025-04-04 00:39:43.841286 | orchestrator | ok: [testbed-node-0] 2025-04-04 00:39:43.842383 | orchestrator | 2025-04-04 00:39:43.843662 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] *************** 2025-04-04 00:39:44.039954 | orchestrator | Friday 04 April 2025 00:39:43 +0000 (0:00:01.538) 0:00:23.144 ********** 2025-04-04 00:39:44.040016 | orchestrator | ok: [testbed-manager] 2025-04-04 00:39:44.130801 | orchestrator | ok: [testbed-node-0] 2025-04-04 00:39:44.398696 | orchestrator | ok: [testbed-node-1] 2025-04-04 00:39:44.503614 | orchestrator | ok: [testbed-node-2] 2025-04-04 00:39:44.593704 | orchestrator | ok: [testbed-node-3] 2025-04-04 00:39:44.766973 | orchestrator | ok: [testbed-node-4] 2025-04-04 00:39:44.768257 | orchestrator | ok: [testbed-node-5] 2025-04-04 00:39:44.769381 | orchestrator | 2025-04-04 00:39:44.770553 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2025-04-04 00:39:44.771526 | orchestrator | Friday 04 April 2025 00:39:44 +0000 (0:00:00.931) 
0:00:24.075 ********** 2025-04-04 00:39:45.209212 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml) 2025-04-04 00:39:45.210130 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)  2025-04-04 00:39:45.211139 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml) 2025-04-04 00:39:45.214337 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)  2025-04-04 00:39:45.318505 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml) 2025-04-04 00:39:45.318686 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)  2025-04-04 00:39:45.814206 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml) 2025-04-04 00:39:45.815128 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)  2025-04-04 00:39:45.815466 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml) 2025-04-04 00:39:45.815480 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)  2025-04-04 00:39:45.816188 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml) 2025-04-04 00:39:45.817208 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)  2025-04-04 00:39:45.817587 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml) 2025-04-04 00:39:45.818051 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)  2025-04-04 00:39:45.818513 | orchestrator | 2025-04-04 00:39:45.819198 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************ 2025-04-04 00:39:45.819617 | orchestrator | Friday 04 April 2025 00:39:45 +0000 (0:00:01.050) 0:00:25.126 ********** 2025-04-04 00:39:46.235374 | orchestrator | skipping: [testbed-manager] 2025-04-04 00:39:46.324804 | orchestrator | skipping: 
[testbed-node-0] 2025-04-04 00:39:46.410420 | orchestrator | skipping: [testbed-node-1] 2025-04-04 00:39:46.490449 | orchestrator | skipping: [testbed-node-2] 2025-04-04 00:39:46.591580 | orchestrator | skipping: [testbed-node-3] 2025-04-04 00:39:47.895690 | orchestrator | skipping: [testbed-node-4] 2025-04-04 00:39:47.896411 | orchestrator | skipping: [testbed-node-5] 2025-04-04 00:39:47.900381 | orchestrator | 2025-04-04 00:39:48.084784 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ******** 2025-04-04 00:39:48.084888 | orchestrator | Friday 04 April 2025 00:39:47 +0000 (0:00:02.077) 0:00:27.204 ********** 2025-04-04 00:39:48.084920 | orchestrator | skipping: [testbed-manager] 2025-04-04 00:39:48.195034 | orchestrator | skipping: [testbed-node-0] 2025-04-04 00:39:48.502597 | orchestrator | skipping: [testbed-node-1] 2025-04-04 00:39:48.592166 | orchestrator | skipping: [testbed-node-2] 2025-04-04 00:39:48.688829 | orchestrator | skipping: [testbed-node-3] 2025-04-04 00:39:48.726399 | orchestrator | skipping: [testbed-node-4] 2025-04-04 00:39:48.728357 | orchestrator | skipping: [testbed-node-5] 2025-04-04 00:39:48.729619 | orchestrator | 2025-04-04 00:39:48.730499 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-04 00:39:48.730841 | orchestrator | 2025-04-04 00:39:48 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-04-04 00:39:48.731156 | orchestrator | 2025-04-04 00:39:48 | INFO  | Please wait and do not abort execution. 
2025-04-04 00:39:48.732456 | orchestrator | testbed-manager : ok=16  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-04-04 00:39:48.734162 | orchestrator | testbed-node-0 : ok=16  changed=4  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-04-04 00:39:48.735732 | orchestrator | testbed-node-1 : ok=16  changed=4  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-04-04 00:39:48.737071 | orchestrator | testbed-node-2 : ok=16  changed=4  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-04-04 00:39:48.738377 | orchestrator | testbed-node-3 : ok=16  changed=4  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-04-04 00:39:48.738931 | orchestrator | testbed-node-4 : ok=16  changed=4  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-04-04 00:39:48.739684 | orchestrator | testbed-node-5 : ok=16  changed=4  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-04-04 00:39:48.743468 | orchestrator |
2025-04-04 00:39:48.743938 | orchestrator | Friday 04 April 2025 00:39:48 +0000 (0:00:00.834) 0:00:28.039 **********
2025-04-04 00:39:48.746688 | orchestrator | ===============================================================================
2025-04-04 00:39:48.747326 | orchestrator | osism.commons.network : Include dummy interfaces ------------------------ 2.08s
2025-04-04 00:39:48.748327 | orchestrator | osism.commons.network : Install required packages ----------------------- 2.00s
2025-04-04 00:39:48.749709 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 1.97s
2025-04-04 00:39:48.751018 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 1.94s
2025-04-04 00:39:48.751949 | orchestrator | osism.commons.network : Copy dispatcher scripts ------------------------- 1.91s
2025-04-04 00:39:48.752898 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.86s
2025-04-04 00:39:48.753735 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.70s
2025-04-04 00:39:48.754933 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.60s
2025-04-04 00:39:48.755941 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.54s
2025-04-04 00:39:48.756672 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.45s
2025-04-04 00:39:48.757802 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.44s
2025-04-04 00:39:48.758511 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.41s
2025-04-04 00:39:48.759674 | orchestrator | osism.commons.network : Create required directories --------------------- 1.34s
2025-04-04 00:39:48.761153 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.09s
2025-04-04 00:39:48.761926 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.05s
2025-04-04 00:39:48.763308 | orchestrator | osism.commons.network : Set network_configured_files fact --------------- 0.93s
2025-04-04 00:39:48.764245 | orchestrator | osism.commons.network : Copy interfaces file ---------------------------- 0.85s
2025-04-04 00:39:48.765196 | orchestrator | osism.commons.network : Netplan configuration changed ------------------- 0.83s
2025-04-04 00:39:48.766363 | orchestrator | osism.commons.network : Gather variables for each operating system ------ 0.79s
2025-04-04 00:39:49.379254 | orchestrator | + osism apply wireguard
2025-04-04 00:39:50.931424 | orchestrator | 2025-04-04 00:39:50 | INFO  | Task 6d11f014-713e-41b5-a9cb-6100b1907256 (wireguard) was prepared for execution.
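The network play above renders a netplan file per host and then removes any netplan file it did not render (in this run, `/etc/netplan/50-cloud-init.yaml` was deleted while `/etc/netplan/01-osism.yaml` was kept). A minimal sketch of that cleanup logic, run against a throwaway temp directory rather than the real `/etc/netplan`; the file names mirror the log, everything else is an assumption, not the actual role code:

```shell
# Sketch of the "Remove unused configuration files" step on a scratch dir.
tmp=$(mktemp -d)
mkdir -p "$tmp/etc/netplan"
touch "$tmp/etc/netplan/50-cloud-init.yaml" "$tmp/etc/netplan/01-osism.yaml"

# Keep only the file the role rendered; delete every other netplan file,
# as the task did with 50-cloud-init.yaml on each host.
keep="01-osism.yaml"
for f in "$tmp"/etc/netplan/*.yaml; do
    [ "$(basename "$f")" = "$keep" ] || rm -f "$f"
done
ls "$tmp/etc/netplan"   # -> 01-osism.yaml
```

On the real hosts the equivalent deletion is followed by the "Netplan configuration changed" handler, which was skipped here because the rendered file itself was unchanged on re-apply.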
2025-04-04 00:39:54.444362 | orchestrator | 2025-04-04 00:39:50 | INFO  | It takes a moment until task 6d11f014-713e-41b5-a9cb-6100b1907256 (wireguard) has been started and output is visible here.
2025-04-04 00:39:54.444491 | orchestrator |
2025-04-04 00:39:54.445407 | orchestrator | PLAY [Apply role wireguard] ****************************************************
2025-04-04 00:39:54.446422 | orchestrator |
2025-04-04 00:39:54.449771 | orchestrator | TASK [osism.services.wireguard : Install iptables package] *********************
2025-04-04 00:39:54.451875 | orchestrator | Friday 04 April 2025 00:39:54 +0000 (0:00:00.184) 0:00:00.184 **********
2025-04-04 00:39:56.119592 | orchestrator | ok: [testbed-manager]
2025-04-04 00:39:56.120421 | orchestrator |
2025-04-04 00:39:56.121260 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ********************
2025-04-04 00:39:56.122517 | orchestrator | Friday 04 April 2025 00:39:56 +0000 (0:00:01.675) 0:00:01.859 **********
2025-04-04 00:40:03.652723 | orchestrator | changed: [testbed-manager]
2025-04-04 00:40:03.653256 | orchestrator |
2025-04-04 00:40:03.653542 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] *******
2025-04-04 00:40:03.654265 | orchestrator | Friday 04 April 2025 00:40:03 +0000 (0:00:07.534) 0:00:09.394 **********
2025-04-04 00:40:04.286121 | orchestrator | changed: [testbed-manager]
2025-04-04 00:40:04.287507 | orchestrator |
2025-04-04 00:40:04.287803 | orchestrator | TASK [osism.services.wireguard : Create preshared key] *************************
2025-04-04 00:40:04.312491 | orchestrator | Friday 04 April 2025 00:40:04 +0000 (0:00:00.633) 0:00:10.027 **********
2025-04-04 00:40:04.757031 | orchestrator | changed: [testbed-manager]
2025-04-04 00:40:04.757142 | orchestrator |
2025-04-04 00:40:04.757488 | orchestrator | TASK [osism.services.wireguard : Get preshared key] ****************************
2025-04-04 00:40:04.758247 | orchestrator | Friday 04 April 2025 00:40:04 +0000 (0:00:00.470) 0:00:10.498 **********
2025-04-04 00:40:05.303095 | orchestrator | ok: [testbed-manager]
2025-04-04 00:40:05.303622 | orchestrator |
2025-04-04 00:40:05.304466 | orchestrator | TASK [osism.services.wireguard : Get public key - server] **********************
2025-04-04 00:40:05.305994 | orchestrator | Friday 04 April 2025 00:40:05 +0000 (0:00:00.546) 0:00:11.044 **********
2025-04-04 00:40:05.900407 | orchestrator | ok: [testbed-manager]
2025-04-04 00:40:05.900665 | orchestrator |
2025-04-04 00:40:05.901117 | orchestrator | TASK [osism.services.wireguard : Get private key - server] *********************
2025-04-04 00:40:05.901154 | orchestrator | Friday 04 April 2025 00:40:05 +0000 (0:00:00.597) 0:00:11.642 **********
2025-04-04 00:40:06.354333 | orchestrator | ok: [testbed-manager]
2025-04-04 00:40:07.691433 | orchestrator |
2025-04-04 00:40:07.691549 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] *************
2025-04-04 00:40:07.691567 | orchestrator | Friday 04 April 2025 00:40:06 +0000 (0:00:00.451) 0:00:12.094 **********
2025-04-04 00:40:07.691597 | orchestrator | changed: [testbed-manager]
2025-04-04 00:40:07.692470 | orchestrator |
2025-04-04 00:40:07.693101 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] **************
2025-04-04 00:40:07.694451 | orchestrator | Friday 04 April 2025 00:40:07 +0000 (0:00:01.337) 0:00:13.431 **********
2025-04-04 00:40:08.619190 | orchestrator | changed: [testbed-manager] => (item=None)
2025-04-04 00:40:08.620209 | orchestrator | changed: [testbed-manager]
2025-04-04 00:40:08.620559 | orchestrator |
2025-04-04 00:40:08.621396 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] **********
2025-04-04 00:40:08.623378 | orchestrator | Friday 04 April 2025 00:40:08 +0000 (0:00:00.927) 0:00:14.359 **********
2025-04-04 00:40:10.443355 | orchestrator | changed: [testbed-manager]
2025-04-04 00:40:10.443530 | orchestrator |
2025-04-04 00:40:10.443560 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] ***************
2025-04-04 00:40:10.444025 | orchestrator | Friday 04 April 2025 00:40:10 +0000 (0:00:01.822) 0:00:16.181 **********
2025-04-04 00:40:11.472361 | orchestrator | changed: [testbed-manager]
2025-04-04 00:40:11.472485 | orchestrator |
2025-04-04 00:40:11.472497 | orchestrator | PLAY RECAP *********************************************************************
2025-04-04 00:40:11.472508 | orchestrator | 2025-04-04 00:40:11 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-04-04 00:40:11.472546 | orchestrator | 2025-04-04 00:40:11 | INFO  | Please wait and do not abort execution.
2025-04-04 00:40:11.472558 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-04-04 00:40:11.472956 | orchestrator |
2025-04-04 00:40:11.472971 | orchestrator | Friday 04 April 2025 00:40:11 +0000 (0:00:01.028) 0:00:17.210 **********
2025-04-04 00:40:11.473380 | orchestrator | ===============================================================================
2025-04-04 00:40:11.474136 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 7.53s
2025-04-04 00:40:11.475142 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.82s
2025-04-04 00:40:11.476189 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.68s
2025-04-04 00:40:11.476713 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.34s
2025-04-04 00:40:11.477093 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 1.03s
2025-04-04 00:40:11.477367 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.93s
2025-04-04 00:40:11.477380 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.63s
2025-04-04 00:40:11.478350 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.60s
2025-04-04 00:40:12.103712 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.55s
2025-04-04 00:40:12.103823 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.47s
2025-04-04 00:40:12.103869 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.45s
2025-04-04 00:40:12.103903 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh
2025-04-04 00:40:12.149823 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current
2025-04-04 00:40:12.226801 | orchestrator | Dload Upload Total Spent Left Speed
2025-04-04 00:40:12.226851 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 14 100 14 0 0 181 0 --:--:-- --:--:-- --:--:-- 181
2025-04-04 00:40:12.242605 | orchestrator | + osism apply --environment custom workarounds
2025-04-04 00:40:13.714787 | orchestrator | 2025-04-04 00:40:13 | INFO  | Trying to run play workarounds in environment custom
2025-04-04 00:40:13.767526 | orchestrator | 2025-04-04 00:40:13 | INFO  | Task 8ee79e64-a611-4cf0-af21-5128ca3801a9 (workarounds) was prepared for execution.
2025-04-04 00:40:17.191125 | orchestrator | 2025-04-04 00:40:13 | INFO  | It takes a moment until task 8ee79e64-a611-4cf0-af21-5128ca3801a9 (workarounds) has been started and output is visible here.
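The wireguard play above generates server keys and a preshared key, then renders `/etc/wireguard/wg0.conf` and manages `wg-quick@wg0.service`. A hedged sketch of what such a wg0.conf typically looks like, written to a temp file here; the addresses, port, and key placeholders are illustrative assumptions, not values from this deployment (real keys come from `wg genkey` / `wg genpsk`):

```shell
# Sketch of a minimal wg-quick configuration like the one the
# "Copy wg0.conf configuration file" task renders (all values hypothetical).
conf=$(mktemp)
cat > "$conf" <<'EOF'
[Interface]
Address = 192.168.42.1/24
ListenPort = 51820
PrivateKey = <server-private-key>

[Peer]
PublicKey = <client-public-key>
PresharedKey = <preshared-key>
AllowedIPs = 192.168.42.2/32
EOF
# Installed as /etc/wireguard/wg0.conf, this is what the
# "Restart wg0 service" handler brings up via wg-quick.
grep -c '^\[Peer\]' "$conf"   # one peer section
```

The matching client configuration files copied in the next task mirror this shape, with the server as the `[Peer]` and an `Endpoint =` line pointing at the manager.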
2025-04-04 00:40:17.191264 | orchestrator |
2025-04-04 00:40:17.193569 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-04-04 00:40:17.375505 | orchestrator |
2025-04-04 00:40:17.375556 | orchestrator | TASK [Group hosts based on virtualization_role] ********************************
2025-04-04 00:40:17.375571 | orchestrator | Friday 04 April 2025 00:40:17 +0000 (0:00:00.146) 0:00:00.146 **********
2025-04-04 00:40:17.375615 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest)
2025-04-04 00:40:17.467985 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest)
2025-04-04 00:40:17.559391 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest)
2025-04-04 00:40:17.649546 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest)
2025-04-04 00:40:17.752528 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest)
2025-04-04 00:40:18.053492 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest)
2025-04-04 00:40:18.054070 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest)
2025-04-04 00:40:18.055199 | orchestrator |
2025-04-04 00:40:18.056840 | orchestrator | PLAY [Apply netplan configuration on the manager node] *************************
2025-04-04 00:40:18.057092 | orchestrator |
2025-04-04 00:40:18.057813 | orchestrator | TASK [Apply netplan configuration] *********************************************
2025-04-04 00:40:18.057849 | orchestrator | Friday 04 April 2025 00:40:18 +0000 (0:00:00.866) 0:00:01.012 **********
2025-04-04 00:40:20.889910 | orchestrator | ok: [testbed-manager]
2025-04-04 00:40:20.891394 | orchestrator |
2025-04-04 00:40:20.892996 | orchestrator | PLAY [Apply netplan configuration on all other nodes] **************************
2025-04-04 00:40:20.893037 | orchestrator |
2025-04-04 00:40:20.894260 | orchestrator | TASK [Apply netplan configuration] *********************************************
2025-04-04 00:40:20.896172 | orchestrator | Friday 04 April 2025 00:40:20 +0000 (0:00:02.830) 0:00:03.842 **********
2025-04-04 00:40:22.620979 | orchestrator | ok: [testbed-node-3]
2025-04-04 00:40:22.623668 | orchestrator | ok: [testbed-node-4]
2025-04-04 00:40:22.623839 | orchestrator | ok: [testbed-node-5]
2025-04-04 00:40:22.627386 | orchestrator | ok: [testbed-node-0]
2025-04-04 00:40:22.630550 | orchestrator | ok: [testbed-node-1]
2025-04-04 00:40:22.630629 | orchestrator | ok: [testbed-node-2]
2025-04-04 00:40:22.630842 | orchestrator |
2025-04-04 00:40:22.631523 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] *************************
2025-04-04 00:40:22.631941 | orchestrator |
2025-04-04 00:40:22.634359 | orchestrator | TASK [Copy custom CA certificates] *********************************************
2025-04-04 00:40:22.637147 | orchestrator | Friday 04 April 2025 00:40:22 +0000 (0:00:01.729) 0:00:05.572 **********
2025-04-04 00:40:24.160460 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-04-04 00:40:24.160650 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-04-04 00:40:24.160712 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-04-04 00:40:24.163076 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-04-04 00:40:24.167651 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-04-04 00:40:24.167685 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-04-04 00:40:27.538928 | orchestrator |
2025-04-04 00:40:27.539062 | orchestrator | TASK [Run update-ca-certificates] **********************************************
2025-04-04 00:40:27.539083 | orchestrator | Friday 04 April 2025 00:40:24 +0000 (0:00:01.541) 0:00:07.113 **********
2025-04-04 00:40:27.539115 | orchestrator | changed: [testbed-node-3]
2025-04-04 00:40:27.540819 | orchestrator | changed: [testbed-node-4]
2025-04-04 00:40:27.540890 | orchestrator | changed: [testbed-node-0]
2025-04-04 00:40:27.543572 | orchestrator | changed: [testbed-node-5]
2025-04-04 00:40:27.545376 | orchestrator | changed: [testbed-node-1]
2025-04-04 00:40:27.546398 | orchestrator | changed: [testbed-node-2]
2025-04-04 00:40:27.547376 | orchestrator |
2025-04-04 00:40:27.548192 | orchestrator | TASK [Run update-ca-trust] *****************************************************
2025-04-04 00:40:27.549246 | orchestrator | Friday 04 April 2025 00:40:27 +0000 (0:00:03.382) 0:00:10.495 **********
2025-04-04 00:40:27.705951 | orchestrator | skipping: [testbed-node-3]
2025-04-04 00:40:27.787608 | orchestrator | skipping: [testbed-node-4]
2025-04-04 00:40:27.879033 | orchestrator | skipping: [testbed-node-5]
2025-04-04 00:40:28.155574 | orchestrator | skipping: [testbed-node-0]
2025-04-04 00:40:28.292171 | orchestrator | skipping: [testbed-node-1]
2025-04-04 00:40:28.292419 | orchestrator | skipping: [testbed-node-2]
2025-04-04 00:40:28.293238 | orchestrator |
2025-04-04 00:40:28.293745 | orchestrator | PLAY [Add a workaround service] ************************************************
2025-04-04 00:40:28.294401 | orchestrator |
2025-04-04 00:40:28.295049 | orchestrator | TASK [Copy workarounds.sh scripts] *********************************************
2025-04-04 00:40:28.296238 | orchestrator | Friday 04 April 2025 00:40:28 +0000 (0:00:00.753) 0:00:11.248 **********
2025-04-04 00:40:30.008001 | orchestrator | changed: [testbed-manager]
2025-04-04 00:40:30.008488 | orchestrator | changed: [testbed-node-3]
2025-04-04 00:40:30.008528 | orchestrator | changed: [testbed-node-4]
2025-04-04 00:40:30.009425 | orchestrator | changed: [testbed-node-5]
2025-04-04 00:40:30.010112 | orchestrator | changed: [testbed-node-0]
2025-04-04 00:40:30.010479 | orchestrator | changed: [testbed-node-1]
2025-04-04 00:40:30.013039 | orchestrator | changed: [testbed-node-2]
2025-04-04 00:40:30.013679 | orchestrator |
2025-04-04 00:40:30.014396 | orchestrator | TASK [Copy workarounds systemd unit file] **************************************
2025-04-04 00:40:30.015484 | orchestrator | Friday 04 April 2025 00:40:30 +0000 (0:00:01.715) 0:00:12.964 **********
2025-04-04 00:40:31.607382 | orchestrator | changed: [testbed-node-3]
2025-04-04 00:40:31.608254 | orchestrator | changed: [testbed-manager]
2025-04-04 00:40:31.608322 | orchestrator | changed: [testbed-node-4]
2025-04-04 00:40:31.609070 | orchestrator | changed: [testbed-node-5]
2025-04-04 00:40:31.609723 | orchestrator | changed: [testbed-node-0]
2025-04-04 00:40:31.610954 | orchestrator | changed: [testbed-node-1]
2025-04-04 00:40:31.612408 | orchestrator | changed: [testbed-node-2]
2025-04-04 00:40:31.612477 | orchestrator |
2025-04-04 00:40:31.613435 | orchestrator | TASK [Reload systemd daemon] ***************************************************
2025-04-04 00:40:31.614364 | orchestrator | Friday 04 April 2025 00:40:31 +0000 (0:00:01.594) 0:00:14.558 **********
2025-04-04 00:40:33.292906 | orchestrator | ok: [testbed-node-4]
2025-04-04 00:40:33.296033 | orchestrator | ok: [testbed-node-0]
2025-04-04 00:40:33.296114 | orchestrator | ok: [testbed-node-3]
2025-04-04 00:40:33.296135 | orchestrator | ok: [testbed-node-5]
2025-04-04 00:40:33.298341 | orchestrator | ok: [testbed-node-1]
2025-04-04 00:40:33.299182 | orchestrator | ok: [testbed-manager]
2025-04-04 00:40:33.300139 | orchestrator | ok: [testbed-node-2]
2025-04-04 00:40:33.301808 | orchestrator |
2025-04-04 00:40:33.302403 | orchestrator | TASK [Enable workarounds.service (Debian)] *************************************
2025-04-04 00:40:33.302867 | orchestrator
| Friday 04 April 2025 00:40:33 +0000 (0:00:01.690) 0:00:16.249 ********** 2025-04-04 00:40:35.061699 | orchestrator | changed: [testbed-manager] 2025-04-04 00:40:35.062085 | orchestrator | changed: [testbed-node-3] 2025-04-04 00:40:35.062130 | orchestrator | changed: [testbed-node-4] 2025-04-04 00:40:35.063281 | orchestrator | changed: [testbed-node-5] 2025-04-04 00:40:35.064050 | orchestrator | changed: [testbed-node-0] 2025-04-04 00:40:35.065264 | orchestrator | changed: [testbed-node-1] 2025-04-04 00:40:35.065544 | orchestrator | changed: [testbed-node-2] 2025-04-04 00:40:35.065575 | orchestrator | 2025-04-04 00:40:35.066053 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] *************************** 2025-04-04 00:40:35.066649 | orchestrator | Friday 04 April 2025 00:40:35 +0000 (0:00:01.764) 0:00:18.013 ********** 2025-04-04 00:40:35.289797 | orchestrator | skipping: [testbed-manager] 2025-04-04 00:40:35.426767 | orchestrator | skipping: [testbed-node-3] 2025-04-04 00:40:35.570122 | orchestrator | skipping: [testbed-node-4] 2025-04-04 00:40:35.728402 | orchestrator | skipping: [testbed-node-5] 2025-04-04 00:40:36.177482 | orchestrator | skipping: [testbed-node-0] 2025-04-04 00:40:36.355216 | orchestrator | skipping: [testbed-node-1] 2025-04-04 00:40:36.356293 | orchestrator | skipping: [testbed-node-2] 2025-04-04 00:40:36.357806 | orchestrator | 2025-04-04 00:40:36.358706 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ****************** 2025-04-04 00:40:36.359744 | orchestrator | 2025-04-04 00:40:36.360666 | orchestrator | TASK [Install python3-docker] ************************************************** 2025-04-04 00:40:36.361393 | orchestrator | Friday 04 April 2025 00:40:36 +0000 (0:00:01.292) 0:00:19.306 ********** 2025-04-04 00:40:39.416802 | orchestrator | ok: [testbed-node-4] 2025-04-04 00:40:39.416985 | orchestrator | ok: [testbed-node-5] 2025-04-04 00:40:39.417013 | orchestrator | ok: [testbed-node-3] 
2025-04-04 00:40:39.418413 | orchestrator | ok: [testbed-node-0] 2025-04-04 00:40:39.421387 | orchestrator | ok: [testbed-node-1] 2025-04-04 00:40:39.421967 | orchestrator | ok: [testbed-node-2] 2025-04-04 00:40:39.422457 | orchestrator | ok: [testbed-manager] 2025-04-04 00:40:39.422694 | orchestrator | 2025-04-04 00:40:39.422726 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-04 00:40:39.423486 | orchestrator | 2025-04-04 00:40:39 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-04-04 00:40:39.423955 | orchestrator | 2025-04-04 00:40:39 | INFO  | Please wait and do not abort execution. 2025-04-04 00:40:39.424001 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-04-04 00:40:39.424325 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-04 00:40:39.424511 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-04 00:40:39.424843 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-04 00:40:39.425619 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-04 00:40:39.425706 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-04 00:40:39.425730 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-04 00:40:39.425849 | orchestrator | 2025-04-04 00:40:39.426552 | orchestrator | Friday 04 April 2025 00:40:39 +0000 (0:00:03.065) 0:00:22.372 ********** 2025-04-04 00:40:39.426798 | orchestrator | =============================================================================== 2025-04-04 00:40:39.427021 | orchestrator | Run update-ca-certificates 
---------------------------------------------- 3.38s 2025-04-04 00:40:39.427379 | orchestrator | Install python3-docker -------------------------------------------------- 3.07s 2025-04-04 00:40:39.428611 | orchestrator | Apply netplan configuration --------------------------------------------- 2.83s 2025-04-04 00:40:39.429543 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.76s 2025-04-04 00:40:39.429759 | orchestrator | Apply netplan configuration --------------------------------------------- 1.73s 2025-04-04 00:40:39.429784 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.72s 2025-04-04 00:40:39.429805 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.69s 2025-04-04 00:40:39.430115 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.59s 2025-04-04 00:40:39.432278 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.54s 2025-04-04 00:40:39.433256 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 1.29s 2025-04-04 00:40:39.433515 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.87s 2025-04-04 00:40:39.434360 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.75s 2025-04-04 00:40:40.460371 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes 2025-04-04 00:40:42.158927 | orchestrator | 2025-04-04 00:40:42 | INFO  | Task 9dcae16b-3c0e-47b7-b671-70a18b55d06e (reboot) was prepared for execution. 2025-04-04 00:40:45.568066 | orchestrator | 2025-04-04 00:40:42 | INFO  | It takes a moment until task 9dcae16b-3c0e-47b7-b671-70a18b55d06e (reboot) has been started and output is visible here. 
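The play above copies `testbed.crt` onto each node and then runs `update-ca-certificates`; the RedHat-side `update-ca-trust` task is skipped because these are Ubuntu nodes. A rough manual equivalent of those two tasks can be sketched as follows. `install_custom_ca` is a hypothetical wrapper, not something from the playbook, and the overridable destination directory is an assumption added for illustration (Debian-family systems pick up extra CAs from `/usr/local/share/ca-certificates`).

```shell
# Hypothetical sketch of the "Copy custom CA certificates" +
# "Run update-ca-certificates" tasks on a Debian-family node.
install_custom_ca() {
    local src="$1"
    # Destination parameterized here for illustration; the system default
    # directory is /usr/local/share/ca-certificates.
    local dest_dir="${2:-/usr/local/share/ca-certificates}"

    # update-ca-certificates only considers *.crt files in that directory.
    install -m 0644 "$src" "$dest_dir/$(basename "$src")"

    # Rebuild the trust store. Only meaningful for the real system directory;
    # on RedHat the analogous command would be update-ca-trust.
    if command -v update-ca-certificates >/dev/null 2>&1 \
        && [ "$dest_dir" = /usr/local/share/ca-certificates ]; then
        update-ca-certificates
    fi
}
```

Usage on a node would be `install_custom_ca /opt/configuration/environments/kolla/certificates/ca/testbed.crt`, matching the source path shown in the task output.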
2025-04-04 00:40:45.568234 | orchestrator | 2025-04-04 00:40:45.568946 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-04-04 00:40:45.569715 | orchestrator | 2025-04-04 00:40:45.571921 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-04-04 00:40:45.661262 | orchestrator | Friday 04 April 2025 00:40:45 +0000 (0:00:00.158) 0:00:00.158 ********** 2025-04-04 00:40:45.661400 | orchestrator | skipping: [testbed-node-0] 2025-04-04 00:40:45.661470 | orchestrator | 2025-04-04 00:40:45.663024 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-04-04 00:40:45.664624 | orchestrator | Friday 04 April 2025 00:40:45 +0000 (0:00:00.095) 0:00:00.254 ********** 2025-04-04 00:40:46.669908 | orchestrator | changed: [testbed-node-0] 2025-04-04 00:40:46.670524 | orchestrator | 2025-04-04 00:40:46.671166 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-04-04 00:40:46.672532 | orchestrator | Friday 04 April 2025 00:40:46 +0000 (0:00:01.006) 0:00:01.260 ********** 2025-04-04 00:40:46.791346 | orchestrator | skipping: [testbed-node-0] 2025-04-04 00:40:46.793466 | orchestrator | 2025-04-04 00:40:46.793538 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-04-04 00:40:46.794460 | orchestrator | 2025-04-04 00:40:46.796117 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-04-04 00:40:46.796848 | orchestrator | Friday 04 April 2025 00:40:46 +0000 (0:00:00.121) 0:00:01.382 ********** 2025-04-04 00:40:46.913075 | orchestrator | skipping: [testbed-node-1] 2025-04-04 00:40:46.913842 | orchestrator | 2025-04-04 00:40:46.915839 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-04-04 00:40:46.917284 | orchestrator | Friday 04 April 2025 
00:40:46 +0000 (0:00:00.123) 0:00:01.505 ********** 2025-04-04 00:40:47.652504 | orchestrator | changed: [testbed-node-1] 2025-04-04 00:40:47.652707 | orchestrator | 2025-04-04 00:40:47.653296 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-04-04 00:40:47.653385 | orchestrator | Friday 04 April 2025 00:40:47 +0000 (0:00:00.739) 0:00:02.245 ********** 2025-04-04 00:40:47.794170 | orchestrator | skipping: [testbed-node-1] 2025-04-04 00:40:47.795626 | orchestrator | 2025-04-04 00:40:47.796616 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-04-04 00:40:47.799206 | orchestrator | 2025-04-04 00:40:47.799878 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-04-04 00:40:47.800756 | orchestrator | Friday 04 April 2025 00:40:47 +0000 (0:00:00.142) 0:00:02.387 ********** 2025-04-04 00:40:47.895591 | orchestrator | skipping: [testbed-node-2] 2025-04-04 00:40:47.898577 | orchestrator | 2025-04-04 00:40:47.899629 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-04-04 00:40:47.899904 | orchestrator | Friday 04 April 2025 00:40:47 +0000 (0:00:00.095) 0:00:02.482 ********** 2025-04-04 00:40:48.658361 | orchestrator | changed: [testbed-node-2] 2025-04-04 00:40:48.658525 | orchestrator | 2025-04-04 00:40:48.659100 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-04-04 00:40:48.659561 | orchestrator | Friday 04 April 2025 00:40:48 +0000 (0:00:00.763) 0:00:03.245 ********** 2025-04-04 00:40:48.790634 | orchestrator | skipping: [testbed-node-2] 2025-04-04 00:40:48.791951 | orchestrator | 2025-04-04 00:40:48.792817 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-04-04 00:40:48.792851 | orchestrator | 2025-04-04 00:40:48.793666 | orchestrator | TASK [Exit playbook, if 
user did not mean to reboot systems] ******************* 2025-04-04 00:40:48.794682 | orchestrator | Friday 04 April 2025 00:40:48 +0000 (0:00:00.136) 0:00:03.381 ********** 2025-04-04 00:40:48.900214 | orchestrator | skipping: [testbed-node-3] 2025-04-04 00:40:48.901689 | orchestrator | 2025-04-04 00:40:48.901726 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-04-04 00:40:48.902173 | orchestrator | Friday 04 April 2025 00:40:48 +0000 (0:00:00.109) 0:00:03.491 ********** 2025-04-04 00:40:49.582571 | orchestrator | changed: [testbed-node-3] 2025-04-04 00:40:49.584869 | orchestrator | 2025-04-04 00:40:49.586183 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-04-04 00:40:49.587630 | orchestrator | Friday 04 April 2025 00:40:49 +0000 (0:00:00.683) 0:00:04.174 ********** 2025-04-04 00:40:49.717784 | orchestrator | skipping: [testbed-node-3] 2025-04-04 00:40:49.718742 | orchestrator | 2025-04-04 00:40:49.720333 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-04-04 00:40:49.721499 | orchestrator | 2025-04-04 00:40:49.722977 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-04-04 00:40:49.724752 | orchestrator | Friday 04 April 2025 00:40:49 +0000 (0:00:00.134) 0:00:04.309 ********** 2025-04-04 00:40:49.827001 | orchestrator | skipping: [testbed-node-4] 2025-04-04 00:40:49.828026 | orchestrator | 2025-04-04 00:40:49.829655 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-04-04 00:40:49.830919 | orchestrator | Friday 04 April 2025 00:40:49 +0000 (0:00:00.111) 0:00:04.420 ********** 2025-04-04 00:40:50.461388 | orchestrator | changed: [testbed-node-4] 2025-04-04 00:40:50.462767 | orchestrator | 2025-04-04 00:40:50.463899 | orchestrator | TASK [Reboot system - wait for the reboot to complete] 
************************* 2025-04-04 00:40:50.465304 | orchestrator | Friday 04 April 2025 00:40:50 +0000 (0:00:00.634) 0:00:05.054 ********** 2025-04-04 00:40:50.612947 | orchestrator | skipping: [testbed-node-4] 2025-04-04 00:40:50.613283 | orchestrator | 2025-04-04 00:40:50.614669 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-04-04 00:40:50.615800 | orchestrator | 2025-04-04 00:40:50.616609 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-04-04 00:40:50.619297 | orchestrator | Friday 04 April 2025 00:40:50 +0000 (0:00:00.148) 0:00:05.203 ********** 2025-04-04 00:40:50.730684 | orchestrator | skipping: [testbed-node-5] 2025-04-04 00:40:50.731067 | orchestrator | 2025-04-04 00:40:50.732790 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-04-04 00:40:50.733387 | orchestrator | Friday 04 April 2025 00:40:50 +0000 (0:00:00.117) 0:00:05.321 ********** 2025-04-04 00:40:51.336431 | orchestrator | changed: [testbed-node-5] 2025-04-04 00:40:51.336626 | orchestrator | 2025-04-04 00:40:51.336654 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-04-04 00:40:51.337361 | orchestrator | Friday 04 April 2025 00:40:51 +0000 (0:00:00.608) 0:00:05.930 ********** 2025-04-04 00:40:51.369539 | orchestrator | skipping: [testbed-node-5] 2025-04-04 00:40:51.369883 | orchestrator | 2025-04-04 00:40:51.371564 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-04 00:40:51.371652 | orchestrator | 2025-04-04 00:40:51 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-04-04 00:40:51.372371 | orchestrator | 2025-04-04 00:40:51 | INFO  | Please wait and do not abort execution. 
2025-04-04 00:40:51.372402 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-04 00:40:51.373865 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-04 00:40:51.373950 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-04 00:40:51.373968 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-04 00:40:51.373986 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-04 00:40:51.374539 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-04 00:40:51.374776 | orchestrator | 2025-04-04 00:40:51.375040 | orchestrator | Friday 04 April 2025 00:40:51 +0000 (0:00:00.031) 0:00:05.962 ********** 2025-04-04 00:40:51.375429 | orchestrator | =============================================================================== 2025-04-04 00:40:51.375595 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.44s 2025-04-04 00:40:51.375896 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.71s 2025-04-04 00:40:51.376178 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.65s 2025-04-04 00:40:51.974239 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes 2025-04-04 00:40:53.600989 | orchestrator | 2025-04-04 00:40:53 | INFO  | Task 37b9894a-926d-4b92-94a9-f8dceb3cf1a9 (wait-for-connection) was prepared for execution. 2025-04-04 00:40:57.087409 | orchestrator | 2025-04-04 00:40:53 | INFO  | It takes a moment until task 37b9894a-926d-4b92-94a9-f8dceb3cf1a9 (wait-for-connection) has been started and output is visible here. 
2025-04-04 00:40:57.087561 | orchestrator | 2025-04-04 00:40:57.088042 | orchestrator | PLAY [Wait until remote systems are reachable] ********************************* 2025-04-04 00:40:57.089199 | orchestrator | 2025-04-04 00:40:57.091495 | orchestrator | TASK [Wait until remote system is reachable] *********************************** 2025-04-04 00:40:57.092697 | orchestrator | Friday 04 April 2025 00:40:57 +0000 (0:00:00.184) 0:00:00.184 ********** 2025-04-04 00:41:08.706470 | orchestrator | ok: [testbed-node-1] 2025-04-04 00:41:08.706675 | orchestrator | ok: [testbed-node-2] 2025-04-04 00:41:08.706698 | orchestrator | ok: [testbed-node-3] 2025-04-04 00:41:08.706712 | orchestrator | ok: [testbed-node-0] 2025-04-04 00:41:08.706725 | orchestrator | ok: [testbed-node-4] 2025-04-04 00:41:08.706738 | orchestrator | ok: [testbed-node-5] 2025-04-04 00:41:08.706750 | orchestrator | 2025-04-04 00:41:08.706765 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-04 00:41:08.706800 | orchestrator | 2025-04-04 00:41:08 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-04-04 00:41:08.711092 | orchestrator | 2025-04-04 00:41:08 | INFO  | Please wait and do not abort execution. 
2025-04-04 00:41:08.711153 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-04 00:41:08.711681 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-04 00:41:08.711705 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-04 00:41:08.711719 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-04 00:41:08.711733 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-04 00:41:08.711746 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-04 00:41:08.711760 | orchestrator | 2025-04-04 00:41:08.711780 | orchestrator | Friday 04 April 2025 00:41:08 +0000 (0:00:11.620) 0:00:11.804 ********** 2025-04-04 00:41:08.712543 | orchestrator | =============================================================================== 2025-04-04 00:41:08.712573 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.62s 2025-04-04 00:41:09.352549 | orchestrator | + osism apply hddtemp 2025-04-04 00:41:11.003969 | orchestrator | 2025-04-04 00:41:11 | INFO  | Task 1fbb3027-ad45-4946-a4f2-d6df7b61fb64 (hddtemp) was prepared for execution. 2025-04-04 00:41:14.717569 | orchestrator | 2025-04-04 00:41:11 | INFO  | It takes a moment until task 1fbb3027-ad45-4946-a4f2-d6df7b61fb64 (hddtemp) has been started and output is visible here. 
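The two commands above implement a fire-and-forget reboot: `osism apply reboot` triggers the reboot without waiting, and `osism apply wait-for-connection` then blocks until every node answers again (11.62 s in this run, via Ansible's `wait_for_connection`). The same poll-until-ready idea can be sketched as a generic shell helper; `retry_until` is a hypothetical name, not something from the log, and the `nc` probe in the usage comment is only illustrative.

```shell
# Hypothetical helper sketching the poll-until-ready pattern behind
# `osism apply wait-for-connection`: retry a probe command until it
# succeeds or a maximum number of attempts is exhausted.
retry_until() {
    local max_attempts="$1"; shift
    local attempt=1
    until "$@"; do
        if [ "$attempt" -ge "$max_attempts" ]; then
            echo "gave up after $max_attempts attempts: $*" >&2
            return 1
        fi
        attempt=$((attempt + 1))
        sleep 1
    done
}

# Illustrative usage: wait until a rebooted node accepts TCP on its SSH port.
# retry_until 60 nc -z testbed-node-0 22
```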
2025-04-04 00:41:14.717709 | orchestrator | 2025-04-04 00:41:14.717969 | orchestrator | PLAY [Apply role hddtemp] ****************************************************** 2025-04-04 00:41:14.718225 | orchestrator | 2025-04-04 00:41:14.720293 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] ***** 2025-04-04 00:41:14.722773 | orchestrator | Friday 04 April 2025 00:41:14 +0000 (0:00:00.243) 0:00:00.243 ********** 2025-04-04 00:41:14.872121 | orchestrator | ok: [testbed-manager] 2025-04-04 00:41:14.960230 | orchestrator | ok: [testbed-node-0] 2025-04-04 00:41:15.051109 | orchestrator | ok: [testbed-node-1] 2025-04-04 00:41:15.132738 | orchestrator | ok: [testbed-node-2] 2025-04-04 00:41:15.212477 | orchestrator | ok: [testbed-node-3] 2025-04-04 00:41:15.496069 | orchestrator | ok: [testbed-node-4] 2025-04-04 00:41:15.496411 | orchestrator | ok: [testbed-node-5] 2025-04-04 00:41:15.497688 | orchestrator | 2025-04-04 00:41:15.498547 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] **** 2025-04-04 00:41:15.499281 | orchestrator | Friday 04 April 2025 00:41:15 +0000 (0:00:00.776) 0:00:01.020 ********** 2025-04-04 00:41:16.828162 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-04-04 00:41:16.832058 | orchestrator | 2025-04-04 00:41:16.832114 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] ************************* 2025-04-04 00:41:16.833972 | orchestrator | Friday 04 April 2025 00:41:16 +0000 (0:00:01.331) 0:00:02.352 ********** 2025-04-04 00:41:18.628832 | orchestrator | ok: [testbed-manager] 2025-04-04 00:41:18.629247 | orchestrator | ok: [testbed-node-1] 2025-04-04 00:41:18.630885 | orchestrator | ok: [testbed-node-0] 2025-04-04 00:41:18.630952 | 
orchestrator | ok: [testbed-node-2] 2025-04-04 00:41:18.631701 | orchestrator | ok: [testbed-node-3] 2025-04-04 00:41:18.632267 | orchestrator | ok: [testbed-node-4] 2025-04-04 00:41:18.633007 | orchestrator | ok: [testbed-node-5] 2025-04-04 00:41:18.634932 | orchestrator | 2025-04-04 00:41:18.635309 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] ***************** 2025-04-04 00:41:18.635807 | orchestrator | Friday 04 April 2025 00:41:18 +0000 (0:00:01.804) 0:00:04.156 ********** 2025-04-04 00:41:19.275686 | orchestrator | changed: [testbed-node-0] 2025-04-04 00:41:19.370760 | orchestrator | changed: [testbed-manager] 2025-04-04 00:41:20.025743 | orchestrator | changed: [testbed-node-1] 2025-04-04 00:41:20.026504 | orchestrator | changed: [testbed-node-2] 2025-04-04 00:41:20.029961 | orchestrator | changed: [testbed-node-3] 2025-04-04 00:41:20.030514 | orchestrator | changed: [testbed-node-4] 2025-04-04 00:41:20.033366 | orchestrator | changed: [testbed-node-5] 2025-04-04 00:41:20.036214 | orchestrator | 2025-04-04 00:41:20.036635 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is available] ********* 2025-04-04 00:41:20.037374 | orchestrator | Friday 04 April 2025 00:41:20 +0000 (0:00:01.389) 0:00:05.545 ********** 2025-04-04 00:41:21.490158 | orchestrator | ok: [testbed-node-0] 2025-04-04 00:41:21.490269 | orchestrator | ok: [testbed-node-1] 2025-04-04 00:41:21.491261 | orchestrator | ok: [testbed-node-2] 2025-04-04 00:41:21.491758 | orchestrator | ok: [testbed-node-3] 2025-04-04 00:41:21.494603 | orchestrator | ok: [testbed-node-4] 2025-04-04 00:41:21.495085 | orchestrator | ok: [testbed-node-5] 2025-04-04 00:41:21.495096 | orchestrator | ok: [testbed-manager] 2025-04-04 00:41:21.495103 | orchestrator | 2025-04-04 00:41:21.495110 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] ******************* 2025-04-04 00:41:21.495119 | orchestrator | Friday 04 April 2025 00:41:21 +0000 
(0:00:01.469) 0:00:07.015 ********** 2025-04-04 00:41:21.776889 | orchestrator | skipping: [testbed-node-0] 2025-04-04 00:41:21.867005 | orchestrator | skipping: [testbed-node-1] 2025-04-04 00:41:21.998559 | orchestrator | skipping: [testbed-node-2] 2025-04-04 00:41:22.085571 | orchestrator | skipping: [testbed-node-3] 2025-04-04 00:41:22.240705 | orchestrator | changed: [testbed-manager] 2025-04-04 00:41:22.242095 | orchestrator | skipping: [testbed-node-4] 2025-04-04 00:41:22.242993 | orchestrator | skipping: [testbed-node-5] 2025-04-04 00:41:22.243465 | orchestrator | 2025-04-04 00:41:22.244164 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] ***************************** 2025-04-04 00:41:22.244408 | orchestrator | Friday 04 April 2025 00:41:22 +0000 (0:00:00.754) 0:00:07.769 ********** 2025-04-04 00:41:33.019952 | orchestrator | changed: [testbed-manager] 2025-04-04 00:41:33.020551 | orchestrator | changed: [testbed-node-1] 2025-04-04 00:41:33.020620 | orchestrator | changed: [testbed-node-2] 2025-04-04 00:41:33.020680 | orchestrator | changed: [testbed-node-0] 2025-04-04 00:41:33.020698 | orchestrator | changed: [testbed-node-5] 2025-04-04 00:41:33.020716 | orchestrator | changed: [testbed-node-4] 2025-04-04 00:41:33.021420 | orchestrator | changed: [testbed-node-3] 2025-04-04 00:41:33.022230 | orchestrator | 2025-04-04 00:41:33.022940 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] **** 2025-04-04 00:41:33.023669 | orchestrator | Friday 04 April 2025 00:41:33 +0000 (0:00:10.769) 0:00:18.539 ********** 2025-04-04 00:41:34.347948 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-04-04 00:41:34.348165 | orchestrator | 2025-04-04 00:41:34.348559 | orchestrator | TASK [osism.services.hddtemp : 
Manage lm-sensors service] ********************** 2025-04-04 00:41:34.348593 | orchestrator | Friday 04 April 2025 00:41:34 +0000 (0:00:01.334) 0:00:19.873 ********** 2025-04-04 00:41:36.283094 | orchestrator | changed: [testbed-node-2] 2025-04-04 00:41:36.283867 | orchestrator | changed: [testbed-node-1] 2025-04-04 00:41:36.285828 | orchestrator | changed: [testbed-node-3] 2025-04-04 00:41:36.287320 | orchestrator | changed: [testbed-node-0] 2025-04-04 00:41:36.288676 | orchestrator | changed: [testbed-manager] 2025-04-04 00:41:36.289829 | orchestrator | changed: [testbed-node-4] 2025-04-04 00:41:36.290434 | orchestrator | changed: [testbed-node-5] 2025-04-04 00:41:36.291877 | orchestrator | 2025-04-04 00:41:36.293525 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-04 00:41:36.293618 | orchestrator | 2025-04-04 00:41:36 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-04-04 00:41:36.294524 | orchestrator | 2025-04-04 00:41:36 | INFO  | Please wait and do not abort execution. 
2025-04-04 00:41:36.294561 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-04 00:41:36.295272 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-04-04 00:41:36.296228 | orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-04-04 00:41:36.296667 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-04-04 00:41:36.297148 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-04-04 00:41:36.297824 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-04-04 00:41:36.298218 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-04-04 00:41:36.298636 | orchestrator | 2025-04-04 00:41:36.299271 | orchestrator | Friday 04 April 2025 00:41:36 +0000 (0:00:01.934) 0:00:21.808 ********** 2025-04-04 00:41:36.299701 | orchestrator | =============================================================================== 2025-04-04 00:41:36.300233 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 10.77s 2025-04-04 00:41:36.300540 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.93s 2025-04-04 00:41:36.301053 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 1.80s 2025-04-04 00:41:36.301780 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.47s 2025-04-04 00:41:36.302219 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.39s 2025-04-04 00:41:36.302614 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.33s 2025-04-04 00:41:36.302897 | orchestrator | 
osism.services.hddtemp : Include distribution specific install tasks ---- 1.33s 2025-04-04 00:41:36.303317 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.78s 2025-04-04 00:41:36.303620 | orchestrator | osism.services.hddtemp : Load Kernel Module drivetemp ------------------- 0.75s 2025-04-04 00:41:37.004294 | orchestrator | + sudo systemctl restart docker-compose@manager 2025-04-04 00:41:38.394266 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-04-04 00:41:38.394568 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2025-04-04 00:41:38.394600 | orchestrator | + local max_attempts=60 2025-04-04 00:41:38.394617 | orchestrator | + local name=ceph-ansible 2025-04-04 00:41:38.394632 | orchestrator | + local attempt_num=1 2025-04-04 00:41:38.394653 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-04-04 00:41:38.435500 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-04-04 00:41:38.436019 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2025-04-04 00:41:38.436046 | orchestrator | + local max_attempts=60 2025-04-04 00:41:38.436061 | orchestrator | + local name=kolla-ansible 2025-04-04 00:41:38.436076 | orchestrator | + local attempt_num=1 2025-04-04 00:41:38.436095 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2025-04-04 00:41:38.471601 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-04-04 00:41:38.517909 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2025-04-04 00:41:38.517962 | orchestrator | + local max_attempts=60 2025-04-04 00:41:38.517978 | orchestrator | + local name=osism-ansible 2025-04-04 00:41:38.517992 | orchestrator | + local attempt_num=1 2025-04-04 00:41:38.518007 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2025-04-04 00:41:38.518105 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-04-04 
00:41:38.714945 | orchestrator | + [[ true == \t\r\u\e ]] 2025-04-04 00:41:38.715002 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2025-04-04 00:41:38.715028 | orchestrator | ARA in ceph-ansible already disabled. 2025-04-04 00:41:38.901664 | orchestrator | ARA in kolla-ansible already disabled. 2025-04-04 00:41:39.105993 | orchestrator | ARA in osism-ansible already disabled. 2025-04-04 00:41:39.277201 | orchestrator | ARA in osism-kubernetes already disabled. 2025-04-04 00:41:39.278280 | orchestrator | + osism apply gather-facts 2025-04-04 00:41:40.910779 | orchestrator | 2025-04-04 00:41:40 | INFO  | Task 74b51417-687d-4bc6-acc3-fd4d5e068b90 (gather-facts) was prepared for execution. 2025-04-04 00:41:44.450772 | orchestrator | 2025-04-04 00:41:40 | INFO  | It takes a moment until task 74b51417-687d-4bc6-acc3-fd4d5e068b90 (gather-facts) has been started and output is visible here. 2025-04-04 00:41:44.450940 | orchestrator | 2025-04-04 00:41:44.451024 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-04-04 00:41:44.451331 | orchestrator | 2025-04-04 00:41:44.451404 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-04-04 00:41:44.451891 | orchestrator | Friday 04 April 2025 00:41:44 +0000 (0:00:00.187) 0:00:00.187 ********** 2025-04-04 00:41:48.777291 | orchestrator | ok: [testbed-node-1] 2025-04-04 00:41:48.777581 | orchestrator | ok: [testbed-node-2] 2025-04-04 00:41:48.778373 | orchestrator | ok: [testbed-node-0] 2025-04-04 00:41:48.781686 | orchestrator | ok: [testbed-manager] 2025-04-04 00:41:48.782319 | orchestrator | ok: [testbed-node-3] 2025-04-04 00:41:48.782872 | orchestrator | ok: [testbed-node-4] 2025-04-04 00:41:48.783497 | orchestrator | ok: [testbed-node-5] 2025-04-04 00:41:48.783927 | orchestrator | 2025-04-04 00:41:48.784333 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 
2025-04-04 00:41:48.784893 | orchestrator | 2025-04-04 00:41:48.785297 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-04-04 00:41:48.785799 | orchestrator | Friday 04 April 2025 00:41:48 +0000 (0:00:04.327) 0:00:04.515 ********** 2025-04-04 00:41:48.968990 | orchestrator | skipping: [testbed-manager] 2025-04-04 00:41:49.060222 | orchestrator | skipping: [testbed-node-0] 2025-04-04 00:41:49.146508 | orchestrator | skipping: [testbed-node-1] 2025-04-04 00:41:49.238092 | orchestrator | skipping: [testbed-node-2] 2025-04-04 00:41:49.324881 | orchestrator | skipping: [testbed-node-3] 2025-04-04 00:41:49.360962 | orchestrator | skipping: [testbed-node-4] 2025-04-04 00:41:49.361250 | orchestrator | skipping: [testbed-node-5] 2025-04-04 00:41:49.361507 | orchestrator | 2025-04-04 00:41:49.362475 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-04 00:41:49.363168 | orchestrator | 2025-04-04 00:41:49 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-04-04 00:41:49.363261 | orchestrator | 2025-04-04 00:41:49 | INFO  | Please wait and do not abort execution. 
2025-04-04 00:41:49.365192 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-04-04 00:41:49.365628 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-04-04 00:41:49.365659 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-04-04 00:41:49.366289 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-04-04 00:41:49.366929 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-04-04 00:41:49.367417 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-04-04 00:41:49.367770 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-04-04 00:41:49.368543 | orchestrator | 2025-04-04 00:41:49.368950 | orchestrator | Friday 04 April 2025 00:41:49 +0000 (0:00:00.585) 0:00:05.100 ********** 2025-04-04 00:41:49.369034 | orchestrator | =============================================================================== 2025-04-04 00:41:49.369771 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.33s 2025-04-04 00:41:49.370391 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.59s 2025-04-04 00:41:50.129974 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper 2025-04-04 00:41:50.145810 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes 2025-04-04 00:41:50.160384 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi 2025-04-04 00:41:50.175974 | orchestrator | + sudo ln -sf 
/opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible 2025-04-04 00:41:50.197541 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook 2025-04-04 00:41:50.212082 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure 2025-04-04 00:41:50.230666 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack 2025-04-04 00:41:50.242909 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring 2025-04-04 00:41:50.255880 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes 2025-04-04 00:41:50.272775 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi 2025-04-04 00:41:50.287861 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible 2025-04-04 00:41:50.305120 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook 2025-04-04 00:41:50.325190 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure 2025-04-04 00:41:50.343855 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack 2025-04-04 00:41:50.357533 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring 2025-04-04 00:41:50.375741 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack 2025-04-04 00:41:50.391321 | orchestrator | + sudo ln -sf 
/opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia 2025-04-04 00:41:50.406956 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi 2025-04-04 00:41:50.421874 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry 2025-04-04 00:41:50.434806 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images 2025-04-04 00:41:50.452394 | orchestrator | + [[ false == \t\r\u\e ]] 2025-04-04 00:41:50.755826 | orchestrator | changed 2025-04-04 00:41:50.819105 | 2025-04-04 00:41:50.819227 | TASK [Deploy services] 2025-04-04 00:41:50.971983 | orchestrator | skipping: Conditional result was False 2025-04-04 00:41:50.983167 | 2025-04-04 00:41:50.983273 | TASK [Deploy in a nutshell] 2025-04-04 00:41:51.680656 | orchestrator | + set -e 2025-04-04 00:41:51.680823 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-04-04 00:41:51.680843 | orchestrator | ++ export INTERACTIVE=false 2025-04-04 00:41:51.680857 | orchestrator | ++ INTERACTIVE=false 2025-04-04 00:41:51.680896 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-04-04 00:41:51.680909 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-04-04 00:41:51.680926 | orchestrator | + source /opt/manager-vars.sh 2025-04-04 00:41:51.680971 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-04-04 00:41:51.680989 | orchestrator | ++ NUMBER_OF_NODES=6 2025-04-04 00:41:51.681000 | orchestrator | ++ export CEPH_VERSION=quincy 2025-04-04 00:41:51.681010 | orchestrator | ++ CEPH_VERSION=quincy 2025-04-04 00:41:51.681019 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-04-04 00:41:51.681029 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-04-04 00:41:51.681038 | orchestrator | ++ export MANAGER_VERSION=8.1.0 2025-04-04 00:41:51.681048 | orchestrator | ++ 
MANAGER_VERSION=8.1.0 2025-04-04 00:41:51.681057 | orchestrator | ++ export OPENSTACK_VERSION=2024.1 2025-04-04 00:41:51.681067 | orchestrator | ++ OPENSTACK_VERSION=2024.1 2025-04-04 00:41:51.681076 | orchestrator | ++ export ARA=false 2025-04-04 00:41:51.681086 | orchestrator | ++ ARA=false 2025-04-04 00:41:51.681095 | orchestrator | ++ export TEMPEST=false 2025-04-04 00:41:51.681105 | orchestrator | ++ TEMPEST=false 2025-04-04 00:41:51.681114 | orchestrator | ++ export IS_ZUUL=true 2025-04-04 00:41:51.681123 | orchestrator | ++ IS_ZUUL=true 2025-04-04 00:41:51.681133 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.77 2025-04-04 00:41:51.681143 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.77 2025-04-04 00:41:51.681153 | orchestrator | ++ export EXTERNAL_API=false 2025-04-04 00:41:51.681163 | orchestrator | ++ EXTERNAL_API=false 2025-04-04 00:41:51.681172 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-04-04 00:41:51.681181 | orchestrator | ++ IMAGE_USER=ubuntu 2025-04-04 00:41:51.681190 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-04-04 00:41:51.681200 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-04-04 00:41:51.681209 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-04-04 00:41:51.681224 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-04-04 00:41:51.681239 | orchestrator | 2025-04-04 00:41:51.683180 | orchestrator | # PULL IMAGES 2025-04-04 00:41:51.683197 | orchestrator | 2025-04-04 00:41:51.683208 | orchestrator | + echo 2025-04-04 00:41:51.683217 | orchestrator | + echo '# PULL IMAGES' 2025-04-04 00:41:51.683227 | orchestrator | + echo 2025-04-04 00:41:51.683240 | orchestrator | ++ semver 8.1.0 7.0.0 2025-04-04 00:41:51.752149 | orchestrator | + [[ 1 -ge 0 ]] 2025-04-04 00:41:53.347959 | orchestrator | + osism apply -r 2 -e custom pull-images 2025-04-04 00:41:53.348089 | orchestrator | 2025-04-04 00:41:53 | INFO  | Trying to run play pull-images in environment custom 2025-04-04 00:41:53.396792 | orchestrator | 
2025-04-04 00:41:53 | INFO  | Task 25c63d60-f4cd-405a-89b7-f75d7ecf483e (pull-images) was prepared for execution. 2025-04-04 00:41:56.945057 | orchestrator | 2025-04-04 00:41:53 | INFO  | It takes a moment until task 25c63d60-f4cd-405a-89b7-f75d7ecf483e (pull-images) has been started and output is visible here. 2025-04-04 00:41:56.945171 | orchestrator | 2025-04-04 00:41:56.947125 | orchestrator | PLAY [Pull images] ************************************************************* 2025-04-04 00:41:56.949142 | orchestrator | 2025-04-04 00:41:56.949523 | orchestrator | TASK [Pull keystone image] ***************************************************** 2025-04-04 00:41:56.950096 | orchestrator | Friday 04 April 2025 00:41:56 +0000 (0:00:00.154) 0:00:00.154 ********** 2025-04-04 00:42:25.805255 | orchestrator | changed: [testbed-manager] 2025-04-04 00:43:22.035523 | orchestrator | 2025-04-04 00:43:22.035693 | orchestrator | TASK [Pull other images] ******************************************************* 2025-04-04 00:43:22.035717 | orchestrator | Friday 04 April 2025 00:42:25 +0000 (0:00:28.859) 0:00:29.013 ********** 2025-04-04 00:43:22.035753 | orchestrator | changed: [testbed-manager] => (item=aodh) 2025-04-04 00:43:22.037961 | orchestrator | changed: [testbed-manager] => (item=barbican) 2025-04-04 00:43:22.037989 | orchestrator | changed: [testbed-manager] => (item=ceilometer) 2025-04-04 00:43:22.038004 | orchestrator | changed: [testbed-manager] => (item=cinder) 2025-04-04 00:43:22.038078 | orchestrator | changed: [testbed-manager] => (item=common) 2025-04-04 00:43:22.038134 | orchestrator | changed: [testbed-manager] => (item=designate) 2025-04-04 00:43:22.038149 | orchestrator | changed: [testbed-manager] => (item=glance) 2025-04-04 00:43:22.038168 | orchestrator | changed: [testbed-manager] => (item=grafana) 2025-04-04 00:43:22.038208 | orchestrator | changed: [testbed-manager] => (item=horizon) 2025-04-04 00:43:22.038223 | orchestrator | changed: [testbed-manager] => 
(item=ironic) 2025-04-04 00:43:22.038249 | orchestrator | changed: [testbed-manager] => (item=loadbalancer) 2025-04-04 00:43:22.038316 | orchestrator | changed: [testbed-manager] => (item=magnum) 2025-04-04 00:43:22.038524 | orchestrator | changed: [testbed-manager] => (item=mariadb) 2025-04-04 00:43:22.038898 | orchestrator | changed: [testbed-manager] => (item=memcached) 2025-04-04 00:43:22.039317 | orchestrator | changed: [testbed-manager] => (item=neutron) 2025-04-04 00:43:22.039607 | orchestrator | changed: [testbed-manager] => (item=nova) 2025-04-04 00:43:22.039954 | orchestrator | changed: [testbed-manager] => (item=octavia) 2025-04-04 00:43:22.040588 | orchestrator | changed: [testbed-manager] => (item=opensearch) 2025-04-04 00:43:22.044304 | orchestrator | changed: [testbed-manager] => (item=openvswitch) 2025-04-04 00:43:22.045387 | orchestrator | changed: [testbed-manager] => (item=ovn) 2025-04-04 00:43:22.045530 | orchestrator | changed: [testbed-manager] => (item=placement) 2025-04-04 00:43:22.045552 | orchestrator | changed: [testbed-manager] => (item=rabbitmq) 2025-04-04 00:43:22.045570 | orchestrator | changed: [testbed-manager] => (item=redis) 2025-04-04 00:43:22.045591 | orchestrator | changed: [testbed-manager] => (item=skyline) 2025-04-04 00:43:22.045940 | orchestrator | 2025-04-04 00:43:22.046112 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-04 00:43:22.047013 | orchestrator | 2025-04-04 00:43:22 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-04-04 00:43:22.048042 | orchestrator | 2025-04-04 00:43:22 | INFO  | Please wait and do not abort execution. 
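The "Pull other images" task above runs the same pull once per service item. A dry-run sketch of what that loop resolves to (the registry and namespace below are hypothetical placeholders for illustration; the real image references come from the kolla configuration, not from this log):

```shell
#!/usr/bin/env bash
# Dry-run sketch only: build the image references such a loop would pull.
# REGISTRY is an assumed placeholder, not taken from the job's configuration.
OPENSTACK_VERSION=2024.1                # matches OPENSTACK_VERSION in /opt/manager-vars.sh above
REGISTRY="registry.example.com/kolla"   # hypothetical
refs=()
for image in aodh barbican ceilometer cinder glance neutron nova; do
    refs+=("${REGISTRY}/${image}:${OPENSTACK_VERSION}")
done
printf '%s\n' "${refs[@]}"
```

An actual pre-pull would run `docker pull` on each reference instead of printing it, which is what produces the `changed: [testbed-manager] => (item=...)` lines in the play output.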
2025-04-04 00:43:22.048254 | orchestrator | testbed-manager : ok=2  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-04 00:43:22.048842 | orchestrator | 2025-04-04 00:43:22.049204 | orchestrator | Friday 04 April 2025 00:43:22 +0000 (0:00:56.230) 0:01:25.244 ********** 2025-04-04 00:43:22.049655 | orchestrator | =============================================================================== 2025-04-04 00:43:22.050118 | orchestrator | Pull other images ------------------------------------------------------ 56.23s 2025-04-04 00:43:22.053169 | orchestrator | Pull keystone image ---------------------------------------------------- 28.86s 2025-04-04 00:43:24.822263 | orchestrator | 2025-04-04 00:43:24 | INFO  | Trying to run play wipe-partitions in environment custom 2025-04-04 00:43:24.876034 | orchestrator | 2025-04-04 00:43:24 | INFO  | Task a82afc6e-31ac-4bb3-9825-1669646fb157 (wipe-partitions) was prepared for execution. 2025-04-04 00:43:28.467769 | orchestrator | 2025-04-04 00:43:24 | INFO  | It takes a moment until task a82afc6e-31ac-4bb3-9825-1669646fb157 (wipe-partitions) has been started and output is visible here. 
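Earlier in this step, the xtrace shows `wait_for_container_healthy 60 <name>` polling `docker inspect -f '{{.State.Health.Status}}'` for each manager container. Reconstructed from that trace (the sleep interval and error message are assumptions; the real helper lives in the testbed configuration scripts):

```shell
# Sketch reconstructed from the xtrace above; interval and error text are assumed.
wait_for_container_healthy() {
    local max_attempts="$1"
    local name="$2"
    local attempt_num=1
    # Poll the container's reported health status until it is "healthy".
    until [[ "$(docker inspect -f '{{.State.Health.Status}}' "$name")" == "healthy" ]]; do
        if (( attempt_num >= max_attempts )); then
            echo "Container ${name} did not become healthy in time" >&2
            return 1
        fi
        (( attempt_num++ ))
        sleep 5   # assumed interval; not visible in the trace
    done
}
```

In the trace above, all three containers (ceph-ansible, kolla-ansible, osism-ansible) already reported `healthy`, so each call returned on its first check.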
2025-04-04 00:43:28.467924 | orchestrator | 2025-04-04 00:43:28.468569 | orchestrator | PLAY [Wipe partitions] ********************************************************* 2025-04-04 00:43:28.468602 | orchestrator | 2025-04-04 00:43:28.470004 | orchestrator | TASK [Find all logical devices owned by UID 167] ******************************* 2025-04-04 00:43:28.471570 | orchestrator | Friday 04 April 2025 00:43:28 +0000 (0:00:00.136) 0:00:00.136 ********** 2025-04-04 00:43:29.124996 | orchestrator | changed: [testbed-node-3] 2025-04-04 00:43:29.125648 | orchestrator | changed: [testbed-node-5] 2025-04-04 00:43:29.127951 | orchestrator | changed: [testbed-node-4] 2025-04-04 00:43:29.129564 | orchestrator | 2025-04-04 00:43:29.131390 | orchestrator | TASK [Remove all rook related logical devices] ********************************* 2025-04-04 00:43:29.132184 | orchestrator | Friday 04 April 2025 00:43:29 +0000 (0:00:00.653) 0:00:00.790 ********** 2025-04-04 00:43:29.303882 | orchestrator | skipping: [testbed-node-3] 2025-04-04 00:43:29.417189 | orchestrator | skipping: [testbed-node-4] 2025-04-04 00:43:29.417716 | orchestrator | skipping: [testbed-node-5] 2025-04-04 00:43:29.417753 | orchestrator | 2025-04-04 00:43:29.418199 | orchestrator | TASK [Find all logical devices with prefix ceph] ******************************* 2025-04-04 00:43:29.419121 | orchestrator | Friday 04 April 2025 00:43:29 +0000 (0:00:00.296) 0:00:01.087 ********** 2025-04-04 00:43:30.148804 | orchestrator | ok: [testbed-node-3] 2025-04-04 00:43:30.151272 | orchestrator | ok: [testbed-node-4] 2025-04-04 00:43:30.152601 | orchestrator | ok: [testbed-node-5] 2025-04-04 00:43:30.153714 | orchestrator | 2025-04-04 00:43:30.155594 | orchestrator | TASK [Remove all ceph related logical devices] ********************************* 2025-04-04 00:43:30.157160 | orchestrator | Friday 04 April 2025 00:43:30 +0000 (0:00:00.726) 0:00:01.814 ********** 2025-04-04 00:43:30.324099 | orchestrator | skipping: 
[testbed-node-3] 2025-04-04 00:43:30.454113 | orchestrator | skipping: [testbed-node-4] 2025-04-04 00:43:30.454780 | orchestrator | skipping: [testbed-node-5] 2025-04-04 00:43:30.454821 | orchestrator | 2025-04-04 00:43:30.455619 | orchestrator | TASK [Check device availability] *********************************************** 2025-04-04 00:43:30.456257 | orchestrator | Friday 04 April 2025 00:43:30 +0000 (0:00:00.311) 0:00:02.125 ********** 2025-04-04 00:43:31.850631 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2025-04-04 00:43:31.854255 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2025-04-04 00:43:31.854312 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2025-04-04 00:43:31.855608 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2025-04-04 00:43:31.855640 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2025-04-04 00:43:31.855655 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2025-04-04 00:43:31.855674 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2025-04-04 00:43:31.856679 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2025-04-04 00:43:31.857323 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2025-04-04 00:43:31.859328 | orchestrator | 2025-04-04 00:43:31.860738 | orchestrator | TASK [Wipe partitions with wipefs] ********************************************* 2025-04-04 00:43:31.861903 | orchestrator | Friday 04 April 2025 00:43:31 +0000 (0:00:01.360) 0:00:03.485 ********** 2025-04-04 00:43:33.127155 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb) 2025-04-04 00:43:33.128243 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb) 2025-04-04 00:43:33.128350 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb) 2025-04-04 00:43:33.132784 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc) 2025-04-04 00:43:33.135140 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdc) 2025-04-04 00:43:33.135329 | orchestrator | ok: 
[testbed-node-4] => (item=/dev/sdc) 2025-04-04 00:43:33.135358 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd) 2025-04-04 00:43:33.137136 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd) 2025-04-04 00:43:33.137613 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd) 2025-04-04 00:43:33.137825 | orchestrator | 2025-04-04 00:43:33.139727 | orchestrator | TASK [Overwrite first 32M with zeros] ****************************************** 2025-04-04 00:43:33.142809 | orchestrator | Friday 04 April 2025 00:43:33 +0000 (0:00:01.306) 0:00:04.792 ********** 2025-04-04 00:43:35.282507 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2025-04-04 00:43:35.283733 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2025-04-04 00:43:35.284525 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2025-04-04 00:43:35.286163 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2025-04-04 00:43:35.288329 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2025-04-04 00:43:35.290860 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2025-04-04 00:43:35.290889 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2025-04-04 00:43:35.290909 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2025-04-04 00:43:35.291767 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2025-04-04 00:43:35.291798 | orchestrator | 2025-04-04 00:43:35.293233 | orchestrator | TASK [Reload udev rules] ******************************************************* 2025-04-04 00:43:35.293982 | orchestrator | Friday 04 April 2025 00:43:35 +0000 (0:00:02.127) 0:00:06.919 ********** 2025-04-04 00:43:35.918310 | orchestrator | changed: [testbed-node-3] 2025-04-04 00:43:35.918937 | orchestrator | changed: [testbed-node-4] 2025-04-04 00:43:35.918991 | orchestrator | changed: [testbed-node-5] 2025-04-04 00:43:35.919014 | orchestrator | 2025-04-04 00:43:35.919562 | orchestrator | TASK [Request device events from the 
kernel] *********************************** 2025-04-04 00:43:35.920690 | orchestrator | Friday 04 April 2025 00:43:35 +0000 (0:00:00.657) 0:00:07.576 ********** 2025-04-04 00:43:36.624835 | orchestrator | changed: [testbed-node-3] 2025-04-04 00:43:36.626179 | orchestrator | changed: [testbed-node-4] 2025-04-04 00:43:36.628194 | orchestrator | changed: [testbed-node-5] 2025-04-04 00:43:36.630134 | orchestrator | 2025-04-04 00:43:36.630165 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-04 00:43:36.635044 | orchestrator | 2025-04-04 00:43:36 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-04-04 00:43:36.635635 | orchestrator | 2025-04-04 00:43:36 | INFO  | Please wait and do not abort execution. 2025-04-04 00:43:36.635665 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-04 00:43:36.638426 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-04 00:43:36.638773 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-04 00:43:36.638801 | orchestrator | 2025-04-04 00:43:36.638816 | orchestrator | Friday 04 April 2025 00:43:36 +0000 (0:00:00.712) 0:00:08.289 ********** 2025-04-04 00:43:36.638831 | orchestrator | =============================================================================== 2025-04-04 00:43:36.638847 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.13s 2025-04-04 00:43:36.638862 | orchestrator | Check device availability ----------------------------------------------- 1.36s 2025-04-04 00:43:36.638876 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.31s 2025-04-04 00:43:36.638890 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.73s 2025-04-04 
00:43:36.638909 | orchestrator | Request device events from the kernel ----------------------------------- 0.71s 2025-04-04 00:43:36.639094 | orchestrator | Reload udev rules ------------------------------------------------------- 0.66s 2025-04-04 00:43:36.639378 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.65s 2025-04-04 00:43:36.640749 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.31s 2025-04-04 00:43:39.586876 | orchestrator | Remove all rook related logical devices --------------------------------- 0.30s 2025-04-04 00:43:39.587018 | orchestrator | 2025-04-04 00:43:39 | INFO  | Task 814c58e8-1837-4df8-8d1b-6edd8471a477 (facts) was prepared for execution. 2025-04-04 00:43:43.931544 | orchestrator | 2025-04-04 00:43:39 | INFO  | It takes a moment until task 814c58e8-1837-4df8-8d1b-6edd8471a477 (facts) has been started and output is visible here. 2025-04-04 00:43:43.931711 | orchestrator | 2025-04-04 00:43:43.931814 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-04-04 00:43:43.936404 | orchestrator | 2025-04-04 00:43:43.936582 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-04-04 00:43:43.937006 | orchestrator | Friday 04 April 2025 00:43:43 +0000 (0:00:00.241) 0:00:00.241 ********** 2025-04-04 00:43:45.048985 | orchestrator | ok: [testbed-manager] 2025-04-04 00:43:45.049565 | orchestrator | ok: [testbed-node-0] 2025-04-04 00:43:45.049795 | orchestrator | ok: [testbed-node-1] 2025-04-04 00:43:45.050269 | orchestrator | ok: [testbed-node-2] 2025-04-04 00:43:45.050656 | orchestrator | ok: [testbed-node-3] 2025-04-04 00:43:45.051028 | orchestrator | ok: [testbed-node-4] 2025-04-04 00:43:45.053781 | orchestrator | ok: [testbed-node-5] 2025-04-04 00:43:45.055459 | orchestrator | 2025-04-04 00:43:45.055854 | orchestrator | TASK [osism.commons.facts : Copy fact files] 
*********************************** 2025-04-04 00:43:45.055889 | orchestrator | Friday 04 April 2025 00:43:45 +0000 (0:00:01.114) 0:00:01.355 ********** 2025-04-04 00:43:45.224235 | orchestrator | skipping: [testbed-manager] 2025-04-04 00:43:45.320519 | orchestrator | skipping: [testbed-node-0] 2025-04-04 00:43:45.413843 | orchestrator | skipping: [testbed-node-1] 2025-04-04 00:43:45.503586 | orchestrator | skipping: [testbed-node-2] 2025-04-04 00:43:45.585078 | orchestrator | skipping: [testbed-node-3] 2025-04-04 00:43:46.425031 | orchestrator | skipping: [testbed-node-4] 2025-04-04 00:43:46.425364 | orchestrator | skipping: [testbed-node-5] 2025-04-04 00:43:46.425826 | orchestrator | 2025-04-04 00:43:46.427276 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-04-04 00:43:46.427868 | orchestrator | 2025-04-04 00:43:46.428629 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-04-04 00:43:46.428990 | orchestrator | Friday 04 April 2025 00:43:46 +0000 (0:00:01.382) 0:00:02.738 ********** 2025-04-04 00:43:51.427610 | orchestrator | ok: [testbed-node-2] 2025-04-04 00:43:51.428057 | orchestrator | ok: [testbed-node-1] 2025-04-04 00:43:51.428659 | orchestrator | ok: [testbed-node-0] 2025-04-04 00:43:51.431301 | orchestrator | ok: [testbed-node-3] 2025-04-04 00:43:51.432573 | orchestrator | ok: [testbed-node-5] 2025-04-04 00:43:51.436966 | orchestrator | ok: [testbed-node-4] 2025-04-04 00:43:51.437815 | orchestrator | ok: [testbed-manager] 2025-04-04 00:43:51.438934 | orchestrator | 2025-04-04 00:43:51.439973 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-04-04 00:43:51.440329 | orchestrator | 2025-04-04 00:43:51.441275 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-04-04 00:43:51.442353 | orchestrator | Friday 04 April 2025 00:43:51 +0000 (0:00:05.002) 
0:00:07.740 ********** 2025-04-04 00:43:51.808273 | orchestrator | skipping: [testbed-manager] 2025-04-04 00:43:51.889953 | orchestrator | skipping: [testbed-node-0] 2025-04-04 00:43:51.991020 | orchestrator | skipping: [testbed-node-1] 2025-04-04 00:43:52.085325 | orchestrator | skipping: [testbed-node-2] 2025-04-04 00:43:52.186304 | orchestrator | skipping: [testbed-node-3] 2025-04-04 00:43:52.231112 | orchestrator | skipping: [testbed-node-4] 2025-04-04 00:43:52.233090 | orchestrator | skipping: [testbed-node-5] 2025-04-04 00:43:52.233215 | orchestrator | 2025-04-04 00:43:52.236077 | orchestrator | 2025-04-04 00:43:52 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-04-04 00:43:52.236152 | orchestrator | 2025-04-04 00:43:52 | INFO  | Please wait and do not abort execution. 2025-04-04 00:43:52.236174 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-04 00:43:52.237638 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-04 00:43:52.237862 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-04 00:43:52.242604 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-04 00:43:52.242636 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-04 00:43:52.244301 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-04 00:43:52.244325 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-04 00:43:52.244344 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-04 00:43:52.245984 | orchestrator | 2025-04-04 00:43:52.248505 | orchestrator | Friday 04 April 2025 00:43:52 
+0000 (0:00:00.803) 0:00:08.544 ********** 2025-04-04 00:43:52.249256 | orchestrator | =============================================================================== 2025-04-04 00:43:52.251444 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.00s 2025-04-04 00:43:52.251913 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.38s 2025-04-04 00:43:52.252654 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.11s 2025-04-04 00:43:52.252945 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.80s 2025-04-04 00:43:55.082648 | orchestrator | 2025-04-04 00:43:55 | INFO  | Task 6c6f2ecc-92f6-49e4-b8bf-c29d675b0e71 (ceph-configure-lvm-volumes) was prepared for execution. 2025-04-04 00:43:59.086765 | orchestrator | 2025-04-04 00:43:55 | INFO  | It takes a moment until task 6c6f2ecc-92f6-49e4-b8bf-c29d675b0e71 (ceph-configure-lvm-volumes) has been started and output is visible here. 
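The wipe-partitions play above (wipefs, zeroing the first 32M, udev reload, device trigger) corresponds to a per-device command sequence roughly like the following. This is a hedged reconstruction from the task names only, shown in dry-run form so the commands are collected rather than executed; the exact flags (e.g. `oflag=direct`) are assumptions:

```shell
# Dry-run reconstruction of the wipe sequence suggested by the task names above.
# run() records each command instead of executing it, so this is safe anywhere.
run() { cmds+=("$*"); }

cmds=()
for dev in /dev/sdb /dev/sdc /dev/sdd; do
    run wipefs --all "$dev"                                    # "Wipe partitions with wipefs"
    run dd if=/dev/zero of="$dev" bs=1M count=32 oflag=direct  # "Overwrite first 32M with zeros" (flags assumed)
done
run udevadm control --reload-rules                             # "Reload udev rules"
run udevadm trigger                                            # "Request device events from the kernel"
printf '%s\n' "${cmds[@]}"
```

Zeroing the start of each disk after `wipefs` removes any leftover LVM/Ceph metadata that a signature wipe alone can miss, and the udev reload/trigger makes the kernel re-read the now-empty devices before the Ceph LVM configuration that follows.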
2025-04-04 00:43:59.086944 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12
2025-04-04 00:43:59.749234 | orchestrator |
2025-04-04 00:43:59.749395 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2025-04-04 00:43:59.749465 | orchestrator |
2025-04-04 00:43:59.751058 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-04-04 00:43:59.753531 | orchestrator | Friday 04 April 2025 00:43:59 +0000 (0:00:00.551) 0:00:00.551 **********
2025-04-04 00:44:00.091914 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-04-04 00:44:00.095624 | orchestrator |
2025-04-04 00:44:00.096623 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-04-04 00:44:00.096652 | orchestrator | Friday 04 April 2025 00:44:00 +0000 (0:00:00.341) 0:00:00.893 **********
2025-04-04 00:44:00.333050 | orchestrator | ok: [testbed-node-3]
2025-04-04 00:44:00.333928 | orchestrator |
2025-04-04 00:44:00.334334 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-04 00:44:00.334885 | orchestrator | Friday 04 April 2025 00:44:00 +0000 (0:00:00.243) 0:00:01.136 **********
2025-04-04 00:44:00.933737 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2025-04-04 00:44:00.933890 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2025-04-04 00:44:00.933911 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2025-04-04 00:44:00.933929 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2025-04-04 00:44:00.934655 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2025-04-04 00:44:00.934855 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2025-04-04 00:44:00.935016 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2025-04-04 00:44:00.935307 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2025-04-04 00:44:00.935469 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2025-04-04 00:44:00.935817 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2025-04-04 00:44:00.937750 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2025-04-04 00:44:00.937840 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2025-04-04 00:44:00.939710 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2025-04-04 00:44:00.939890 | orchestrator |
2025-04-04 00:44:00.940331 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-04 00:44:00.940656 | orchestrator | Friday 04 April 2025 00:44:00 +0000 (0:00:00.595) 0:00:01.732 **********
2025-04-04 00:44:01.174862 | orchestrator | skipping: [testbed-node-3]
2025-04-04 00:44:01.175018 | orchestrator |
2025-04-04 00:44:01.175566 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-04 00:44:01.175743 | orchestrator | Friday 04 April 2025 00:44:01 +0000 (0:00:00.243) 0:00:01.975 **********
2025-04-04 00:44:01.378267 | orchestrator | skipping: [testbed-node-3]
2025-04-04 00:44:01.380133 | orchestrator |
2025-04-04 00:44:01.380583 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-04 00:44:01.381262 | orchestrator | Friday 04 April 2025 00:44:01 +0000 (0:00:00.206) 0:00:02.181 **********
2025-04-04 00:44:01.583639 | orchestrator | skipping: [testbed-node-3]
2025-04-04 00:44:01.584253 | orchestrator |
2025-04-04 00:44:01.584761 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-04 00:44:01.585589 | orchestrator | Friday 04 April 2025 00:44:01 +0000 (0:00:00.200) 0:00:02.382 **********
2025-04-04 00:44:01.808651 | orchestrator | skipping: [testbed-node-3]
2025-04-04 00:44:01.809220 | orchestrator |
2025-04-04 00:44:01.809251 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-04 00:44:01.810690 | orchestrator | Friday 04 April 2025 00:44:01 +0000 (0:00:00.228) 0:00:02.611 **********
2025-04-04 00:44:02.027162 | orchestrator | skipping: [testbed-node-3]
2025-04-04 00:44:02.028902 | orchestrator |
2025-04-04 00:44:02.030011 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-04 00:44:02.030799 | orchestrator | Friday 04 April 2025 00:44:02 +0000 (0:00:00.220) 0:00:02.831 **********
2025-04-04 00:44:02.263366 | orchestrator | skipping: [testbed-node-3]
2025-04-04 00:44:02.264890 | orchestrator |
2025-04-04 00:44:02.266705 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-04 00:44:02.268057 | orchestrator | Friday 04 April 2025 00:44:02 +0000 (0:00:00.234) 0:00:03.066 **********
2025-04-04 00:44:02.481343 | orchestrator | skipping: [testbed-node-3]
2025-04-04 00:44:02.483181 | orchestrator |
2025-04-04 00:44:02.485983 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-04 00:44:02.683915 | orchestrator | Friday 04 April 2025 00:44:02 +0000 (0:00:00.217) 0:00:03.283 **********
2025-04-04 00:44:02.684034 | orchestrator | skipping: [testbed-node-3]
2025-04-04 00:44:02.684755 | orchestrator |
2025-04-04 00:44:02.686268 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-04 00:44:03.365464 | orchestrator | Friday 04 April 2025 00:44:02 +0000 (0:00:00.203) 0:00:03.487 **********
2025-04-04 00:44:03.365612 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_2d6bb235-b4a5-4107-827e-9430e4ec8db1)
2025-04-04 00:44:03.367096 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_2d6bb235-b4a5-4107-827e-9430e4ec8db1)
2025-04-04 00:44:03.370485 | orchestrator |
2025-04-04 00:44:03.372714 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-04 00:44:04.348608 | orchestrator | Friday 04 April 2025 00:44:03 +0000 (0:00:00.679) 0:00:04.167 **********
2025-04-04 00:44:04.348756 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_d452a44b-aae6-4065-a78c-9a36ae27c0a3)
2025-04-04 00:44:04.349035 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_d452a44b-aae6-4065-a78c-9a36ae27c0a3)
2025-04-04 00:44:04.349731 | orchestrator |
2025-04-04 00:44:04.350528 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-04 00:44:04.351301 | orchestrator | Friday 04 April 2025 00:44:04 +0000 (0:00:00.984) 0:00:05.151 **********
2025-04-04 00:44:04.850117 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_7e1bb25e-090b-4c97-add7-925cccadf2fe)
2025-04-04 00:44:04.851930 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_7e1bb25e-090b-4c97-add7-925cccadf2fe)
2025-04-04 00:44:04.854658 | orchestrator |
2025-04-04 00:44:04.855630 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-04 00:44:04.858354 | orchestrator | Friday 04 April 2025 00:44:04 +0000 (0:00:00.498) 0:00:05.650 **********
2025-04-04 00:44:05.400708 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_5f8b0337-6fe2-4311-94ea-5b7abe02e48e)
2025-04-04 00:44:05.401913 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_5f8b0337-6fe2-4311-94ea-5b7abe02e48e)
2025-04-04 00:44:05.401945 | orchestrator |
2025-04-04 00:44:05.401967 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-04 00:44:05.402892 | orchestrator | Friday 04 April 2025 00:44:05 +0000 (0:00:00.552) 0:00:06.202 **********
2025-04-04 00:44:05.798699 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-04-04 00:44:05.799139 | orchestrator |
2025-04-04 00:44:05.800882 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-04-04 00:44:05.801619 | orchestrator | Friday 04 April 2025 00:44:05 +0000 (0:00:00.399) 0:00:06.602 **********
2025-04-04 00:44:06.295387 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2025-04-04 00:44:06.296556 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2025-04-04 00:44:06.296606 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2025-04-04 00:44:06.297281 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2025-04-04 00:44:06.298926 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2025-04-04 00:44:06.299904 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2025-04-04 00:44:06.300321 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2025-04-04 00:44:06.301104 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2025-04-04 00:44:06.305703 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2025-04-04 00:44:06.306087 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2025-04-04 00:44:06.306358 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2025-04-04 00:44:06.307046 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2025-04-04 00:44:06.307863 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2025-04-04 00:44:06.307957 | orchestrator |
2025-04-04 00:44:06.308645 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-04-04 00:44:06.308976 | orchestrator | Friday 04 April 2025 00:44:06 +0000 (0:00:00.496) 0:00:07.098 **********
2025-04-04 00:44:06.553782 | orchestrator | skipping: [testbed-node-3]
2025-04-04 00:44:06.557759 | orchestrator |
2025-04-04 00:44:06.559016 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-04-04 00:44:06.562259 | orchestrator | Friday 04 April 2025 00:44:06 +0000 (0:00:00.257) 0:00:07.355 **********
2025-04-04 00:44:06.772940 | orchestrator | skipping: [testbed-node-3]
2025-04-04 00:44:06.773587 | orchestrator |
2025-04-04 00:44:06.774780 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-04-04 00:44:06.775605 | orchestrator | Friday 04 April 2025 00:44:06 +0000 (0:00:00.221) 0:00:07.576 **********
2025-04-04 00:44:06.998076 | orchestrator | skipping: [testbed-node-3]
2025-04-04 00:44:07.205320 | orchestrator |
2025-04-04 00:44:07.205372 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-04-04 00:44:07.205390 | orchestrator | Friday 04 April 2025 00:44:06 +0000 (0:00:00.224) 0:00:07.801 **********
2025-04-04 00:44:07.205470 | orchestrator | skipping: [testbed-node-3]
2025-04-04 00:44:07.207585 | orchestrator |
2025-04-04 00:44:07.207816 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-04-04 00:44:07.208579 | orchestrator | Friday 04 April 2025 00:44:07 +0000 (0:00:00.206) 0:00:08.008 **********
2025-04-04 00:44:07.859892 | orchestrator | skipping: [testbed-node-3]
2025-04-04 00:44:07.861182 | orchestrator |
2025-04-04 00:44:07.861247 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-04-04 00:44:07.861389 | orchestrator | Friday 04 April 2025 00:44:07 +0000 (0:00:00.650) 0:00:08.658 **********
2025-04-04 00:44:08.064021 | orchestrator | skipping: [testbed-node-3]
2025-04-04 00:44:08.064416 | orchestrator |
2025-04-04 00:44:08.065723 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-04-04 00:44:08.066056 | orchestrator | Friday 04 April 2025 00:44:08 +0000 (0:00:00.209) 0:00:08.868 **********
2025-04-04 00:44:08.299897 | orchestrator | skipping: [testbed-node-3]
2025-04-04 00:44:08.300112 | orchestrator |
2025-04-04 00:44:08.300140 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-04-04 00:44:08.300197 | orchestrator | Friday 04 April 2025 00:44:08 +0000 (0:00:00.234) 0:00:09.102 **********
2025-04-04 00:44:08.521343 | orchestrator | skipping: [testbed-node-3]
2025-04-04 00:44:08.521959 | orchestrator |
2025-04-04 00:44:08.522340 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-04-04 00:44:08.523091 | orchestrator | Friday 04 April 2025 00:44:08 +0000 (0:00:00.220) 0:00:09.322 **********
2025-04-04 00:44:09.299880 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2025-04-04 00:44:09.300071 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2025-04-04 00:44:09.300097 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2025-04-04 00:44:09.300976 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2025-04-04 00:44:09.302346 | orchestrator |
2025-04-04 00:44:09.302565 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-04-04 00:44:09.303174 | orchestrator | Friday 04 April 2025 00:44:09 +0000 (0:00:00.781) 0:00:10.104 **********
2025-04-04 00:44:09.515485 | orchestrator | skipping: [testbed-node-3]
2025-04-04 00:44:09.515955 | orchestrator |
2025-04-04 00:44:09.517606 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-04-04 00:44:09.519802 | orchestrator | Friday 04 April 2025 00:44:09 +0000 (0:00:00.213) 0:00:10.317 **********
2025-04-04 00:44:09.754898 | orchestrator | skipping: [testbed-node-3]
2025-04-04 00:44:09.755580 | orchestrator |
2025-04-04 00:44:09.756591 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-04-04 00:44:09.759607 | orchestrator | Friday 04 April 2025 00:44:09 +0000 (0:00:00.240) 0:00:10.557 **********
2025-04-04 00:44:09.986381 | orchestrator | skipping: [testbed-node-3]
2025-04-04 00:44:09.988220 | orchestrator |
2025-04-04 00:44:09.989116 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-04-04 00:44:09.989212 | orchestrator | Friday 04 April 2025 00:44:09 +0000 (0:00:00.232) 0:00:10.790 **********
2025-04-04 00:44:10.211602 | orchestrator | skipping: [testbed-node-3]
2025-04-04 00:44:10.212043 | orchestrator |
2025-04-04 00:44:10.212925 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2025-04-04 00:44:10.217012 | orchestrator | Friday 04 April 2025 00:44:10 +0000 (0:00:00.223) 0:00:11.013 **********
2025-04-04 00:44:10.415597 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None})
2025-04-04 00:44:10.418442 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None})
2025-04-04 00:44:10.418906 | orchestrator |
2025-04-04 00:44:10.419343 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2025-04-04 00:44:10.422512 | orchestrator | Friday 04 April 2025 00:44:10 +0000 (0:00:00.205) 0:00:11.219 **********
2025-04-04 00:44:10.587979 | orchestrator | skipping: [testbed-node-3]
2025-04-04 00:44:10.589044 | orchestrator |
2025-04-04 00:44:10.589080 | orchestrator | TASK [Generate DB VG names] ****************************************************
2025-04-04 00:44:10.589259 | orchestrator | Friday 04 April 2025 00:44:10 +0000 (0:00:00.170) 0:00:11.390 **********
2025-04-04 00:44:10.972245 | orchestrator | skipping: [testbed-node-3]
2025-04-04 00:44:10.973046 | orchestrator |
2025-04-04 00:44:10.973467 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2025-04-04 00:44:10.973945 | orchestrator | Friday 04 April 2025 00:44:10 +0000 (0:00:00.386) 0:00:11.776 **********
2025-04-04 00:44:11.120809 | orchestrator | skipping: [testbed-node-3]
2025-04-04 00:44:11.121003 | orchestrator |
2025-04-04 00:44:11.122000 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2025-04-04 00:44:11.122312 | orchestrator | Friday 04 April 2025 00:44:11 +0000 (0:00:00.148) 0:00:11.925 **********
2025-04-04 00:44:11.271380 | orchestrator | ok: [testbed-node-3]
2025-04-04 00:44:11.271690 | orchestrator |
2025-04-04 00:44:11.271764 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2025-04-04 00:44:11.272585 | orchestrator | Friday 04 April 2025 00:44:11 +0000 (0:00:00.150) 0:00:12.075 **********
2025-04-04 00:44:11.481800 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '9f0483a6-c41e-5563-949a-aef1708b660a'}})
2025-04-04 00:44:11.482226 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '23d8bf4a-a5da-5ae2-b325-0a959eaad2e5'}})
2025-04-04 00:44:11.483411 | orchestrator |
2025-04-04 00:44:11.659901 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2025-04-04 00:44:11.659988 | orchestrator | Friday 04 April 2025 00:44:11 +0000 (0:00:00.209) 0:00:12.285 **********
2025-04-04 00:44:11.660017 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '9f0483a6-c41e-5563-949a-aef1708b660a'}})
2025-04-04 00:44:11.660119 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '23d8bf4a-a5da-5ae2-b325-0a959eaad2e5'}})
2025-04-04 00:44:11.660192 | orchestrator | skipping: [testbed-node-3]
2025-04-04 00:44:11.660654 | orchestrator |
2025-04-04 00:44:11.661909 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2025-04-04 00:44:11.662416 | orchestrator | Friday 04 April 2025 00:44:11 +0000 (0:00:00.176) 0:00:12.462 **********
2025-04-04 00:44:11.838459 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '9f0483a6-c41e-5563-949a-aef1708b660a'}})
2025-04-04 00:44:11.843857 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '23d8bf4a-a5da-5ae2-b325-0a959eaad2e5'}})
2025-04-04 00:44:11.845967 | orchestrator | skipping: [testbed-node-3]
2025-04-04 00:44:11.846000 | orchestrator |
2025-04-04 00:44:11.849093 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2025-04-04 00:44:11.849641 | orchestrator | Friday 04 April 2025 00:44:11 +0000 (0:00:00.179) 0:00:12.641 **********
2025-04-04 00:44:12.014292 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '9f0483a6-c41e-5563-949a-aef1708b660a'}})
2025-04-04 00:44:12.017557 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '23d8bf4a-a5da-5ae2-b325-0a959eaad2e5'}})
2025-04-04 00:44:12.017642 | orchestrator | skipping: [testbed-node-3]
2025-04-04 00:44:12.017666 | orchestrator |
2025-04-04 00:44:12.023621 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2025-04-04 00:44:12.169107 | orchestrator | Friday 04 April 2025 00:44:12 +0000 (0:00:00.176) 0:00:12.818 **********
2025-04-04 00:44:12.169162 | orchestrator | ok: [testbed-node-3]
2025-04-04 00:44:12.169655 | orchestrator |
2025-04-04 00:44:12.170399 | orchestrator | TASK [Set OSD devices config data] *********************************************
2025-04-04 00:44:12.171236 | orchestrator | Friday 04 April 2025 00:44:12 +0000 (0:00:00.155) 0:00:12.973 **********
2025-04-04 00:44:12.313942 | orchestrator | ok: [testbed-node-3]
2025-04-04 00:44:12.314892 | orchestrator |
2025-04-04 00:44:12.321612 | orchestrator | TASK [Set DB devices config data] **********************************************
2025-04-04 00:44:12.456958 | orchestrator | Friday 04 April 2025 00:44:12 +0000 (0:00:00.142) 0:00:13.116 **********
2025-04-04 00:44:12.457028 | orchestrator | skipping: [testbed-node-3]
2025-04-04 00:44:12.458642 | orchestrator |
2025-04-04 00:44:12.458993 | orchestrator | TASK [Set WAL devices config data] *********************************************
2025-04-04 00:44:12.463255 | orchestrator | Friday 04 April 2025 00:44:12 +0000 (0:00:00.143) 0:00:13.259 **********
2025-04-04 00:44:12.640401 | orchestrator | skipping: [testbed-node-3]
2025-04-04 00:44:12.641200 | orchestrator |
2025-04-04 00:44:12.642598 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2025-04-04 00:44:12.643510 | orchestrator | Friday 04 April 2025 00:44:12 +0000 (0:00:00.183) 0:00:13.442 **********
2025-04-04 00:44:12.785334 | orchestrator | skipping: [testbed-node-3]
2025-04-04 00:44:12.785592 | orchestrator |
2025-04-04 00:44:12.786769 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2025-04-04 00:44:12.787555 | orchestrator | Friday 04 April 2025 00:44:12 +0000 (0:00:00.146) 0:00:13.588 **********
2025-04-04 00:44:13.166811 | orchestrator | ok: [testbed-node-3] => {
2025-04-04 00:44:13.166989 | orchestrator |     "ceph_osd_devices": {
2025-04-04 00:44:13.167991 | orchestrator |         "sdb": {
2025-04-04 00:44:13.169135 | orchestrator |             "osd_lvm_uuid": "9f0483a6-c41e-5563-949a-aef1708b660a"
2025-04-04 00:44:13.169871 | orchestrator |         },
2025-04-04 00:44:13.174512 | orchestrator |         "sdc": {
2025-04-04 00:44:13.175051 | orchestrator |             "osd_lvm_uuid": "23d8bf4a-a5da-5ae2-b325-0a959eaad2e5"
2025-04-04 00:44:13.177020 | orchestrator |         }
2025-04-04 00:44:13.177957 | orchestrator |     }
2025-04-04 00:44:13.179660 | orchestrator | }
2025-04-04 00:44:13.180771 | orchestrator |
2025-04-04 00:44:13.181973 | orchestrator | TASK [Print WAL devices] *******************************************************
2025-04-04 00:44:13.182966 | orchestrator | Friday 04 April 2025 00:44:13 +0000 (0:00:00.381) 0:00:13.970 **********
2025-04-04 00:44:13.355701 | orchestrator | skipping: [testbed-node-3]
2025-04-04 00:44:13.355779 | orchestrator |
2025-04-04 00:44:13.355803 | orchestrator | TASK [Print DB devices] ********************************************************
2025-04-04 00:44:13.355895 | orchestrator | Friday 04 April 2025 00:44:13 +0000 (0:00:00.188) 0:00:14.159 **********
2025-04-04 00:44:13.508993 | orchestrator | skipping: [testbed-node-3]
2025-04-04 00:44:13.510103 | orchestrator |
2025-04-04 00:44:13.510953 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2025-04-04 00:44:13.512000 | orchestrator | Friday 04 April 2025 00:44:13 +0000 (0:00:00.153) 0:00:14.312 **********
2025-04-04 00:44:13.664763 | orchestrator | skipping: [testbed-node-3]
2025-04-04 00:44:13.665218 | orchestrator |
2025-04-04 00:44:13.666145 | orchestrator | TASK [Print configuration data] ************************************************
2025-04-04 00:44:13.666519 | orchestrator | Friday 04 April 2025 00:44:13 +0000 (0:00:00.155) 0:00:14.468 **********
2025-04-04 00:44:13.972493 | orchestrator | changed: [testbed-node-3] => {
2025-04-04 00:44:13.973913 | orchestrator |     "_ceph_configure_lvm_config_data": {
2025-04-04 00:44:13.974789 | orchestrator |         "ceph_osd_devices": {
2025-04-04 00:44:13.975826 | orchestrator |             "sdb": {
2025-04-04 00:44:13.976615 | orchestrator |                 "osd_lvm_uuid": "9f0483a6-c41e-5563-949a-aef1708b660a"
2025-04-04 00:44:13.983958 | orchestrator |             },
2025-04-04 00:44:13.984599 | orchestrator |             "sdc": {
2025-04-04 00:44:13.984627 | orchestrator |                 "osd_lvm_uuid": "23d8bf4a-a5da-5ae2-b325-0a959eaad2e5"
2025-04-04 00:44:13.984647 | orchestrator |             }
2025-04-04 00:44:13.984663 | orchestrator |         },
2025-04-04 00:44:13.984679 | orchestrator |         "lvm_volumes": [
2025-04-04 00:44:13.984695 | orchestrator |             {
2025-04-04 00:44:13.984717 | orchestrator |                 "data": "osd-block-9f0483a6-c41e-5563-949a-aef1708b660a",
2025-04-04 00:44:13.985535 | orchestrator |                 "data_vg": "ceph-9f0483a6-c41e-5563-949a-aef1708b660a"
2025-04-04 00:44:13.986651 | orchestrator |             },
2025-04-04 00:44:13.986972 | orchestrator |             {
2025-04-04 00:44:13.987004 | orchestrator |                 "data": "osd-block-23d8bf4a-a5da-5ae2-b325-0a959eaad2e5",
2025-04-04 00:44:13.987665 | orchestrator |                 "data_vg": "ceph-23d8bf4a-a5da-5ae2-b325-0a959eaad2e5"
2025-04-04 00:44:13.988325 | orchestrator |             }
2025-04-04 00:44:13.989067 | orchestrator |         ]
2025-04-04 00:44:13.989699 | orchestrator |     }
2025-04-04 00:44:13.990147 | orchestrator | }
2025-04-04 00:44:13.990912 | orchestrator |
2025-04-04 00:44:13.991553 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2025-04-04 00:44:13.992051 | orchestrator | Friday 04 April 2025 00:44:13 +0000 (0:00:00.307) 0:00:14.776 **********
2025-04-04 00:44:16.440011 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-04-04 00:44:16.440583 | orchestrator |
2025-04-04 00:44:16.440620 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2025-04-04 00:44:16.440639 | orchestrator |
2025-04-04 00:44:16.440662 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-04-04 00:44:16.442698 | orchestrator | Friday 04 April 2025 00:44:16 +0000 (0:00:02.463) 0:00:17.239 **********
2025-04-04 00:44:16.716329 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2025-04-04 00:44:16.717328 | orchestrator |
2025-04-04 00:44:16.717401 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-04-04 00:44:16.718059 | orchestrator | Friday 04 April 2025 00:44:16 +0000 (0:00:00.279) 0:00:17.519 **********
2025-04-04 00:44:16.962391 | orchestrator | ok: [testbed-node-4]
2025-04-04 00:44:16.962618 | orchestrator |
2025-04-04 00:44:16.963659 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-04 00:44:16.969754 | orchestrator | Friday 04 April 2025 00:44:16 +0000 (0:00:00.244) 0:00:17.764 **********
2025-04-04 00:44:17.421233 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0)
2025-04-04 00:44:17.421533 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1)
2025-04-04 00:44:17.421574 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2)
2025-04-04 00:44:17.421927 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3)
2025-04-04 00:44:17.423022 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4)
2025-04-04 00:44:17.423078 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5)
2025-04-04 00:44:17.423099 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6)
2025-04-04 00:44:17.423309 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7)
2025-04-04 00:44:17.425861 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda)
2025-04-04 00:44:17.426996 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb)
2025-04-04 00:44:17.428170 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc)
2025-04-04 00:44:17.434612 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd)
2025-04-04 00:44:17.628050 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0)
2025-04-04 00:44:17.628125 | orchestrator |
2025-04-04 00:44:17.628142 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-04 00:44:17.628158 | orchestrator | Friday 04 April 2025 00:44:17 +0000 (0:00:00.461) 0:00:18.225 **********
2025-04-04 00:44:17.628184 | orchestrator | skipping: [testbed-node-4]
2025-04-04 00:44:17.628395 | orchestrator |
2025-04-04 00:44:17.629475 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-04 00:44:17.630202 | orchestrator | Friday 04 April 2025 00:44:17 +0000 (0:00:00.203) 0:00:18.429 **********
2025-04-04 00:44:17.826310 | orchestrator | skipping: [testbed-node-4]
2025-04-04 00:44:17.832844 | orchestrator |
2025-04-04 00:44:17.835062 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-04 00:44:17.835094 | orchestrator | Friday 04 April 2025 00:44:17 +0000 (0:00:00.200) 0:00:18.630 **********
2025-04-04 00:44:18.037670 | orchestrator | skipping: [testbed-node-4]
2025-04-04 00:44:18.038931 | orchestrator |
2025-04-04 00:44:18.040878 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-04 00:44:18.041354 | orchestrator | Friday 04 April 2025 00:44:18 +0000 (0:00:00.208) 0:00:18.838 **********
2025-04-04 00:44:18.676871 | orchestrator | skipping: [testbed-node-4]
2025-04-04 00:44:18.678139 | orchestrator |
2025-04-04 00:44:18.680752 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-04 00:44:18.896348 | orchestrator | Friday 04 April 2025 00:44:18 +0000 (0:00:00.639) 0:00:19.478 **********
2025-04-04 00:44:18.896510 | orchestrator | skipping: [testbed-node-4]
2025-04-04 00:44:18.897686 | orchestrator |
2025-04-04 00:44:18.900645 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-04 00:44:18.900693 | orchestrator | Friday 04 April 2025 00:44:18 +0000 (0:00:00.219) 0:00:19.698 **********
2025-04-04 00:44:19.119832 | orchestrator | skipping: [testbed-node-4]
2025-04-04 00:44:19.125868 | orchestrator |
2025-04-04 00:44:19.130103 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-04 00:44:19.334811 | orchestrator | Friday 04 April 2025 00:44:19 +0000 (0:00:00.223) 0:00:19.921 **********
2025-04-04 00:44:19.334908 | orchestrator | skipping: [testbed-node-4]
2025-04-04 00:44:19.335496 | orchestrator |
2025-04-04 00:44:19.336280 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-04 00:44:19.336747 | orchestrator | Friday 04 April 2025 00:44:19 +0000 (0:00:00.217) 0:00:20.139 **********
2025-04-04 00:44:19.570271 | orchestrator | skipping: [testbed-node-4]
2025-04-04 00:44:19.991492 | orchestrator |
2025-04-04 00:44:19.991599 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-04 00:44:19.991616 | orchestrator | Friday 04 April 2025 00:44:19 +0000 (0:00:00.227) 0:00:20.367 **********
2025-04-04 00:44:19.991646 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_3aa6ea8b-0579-4a55-b42a-d9feea6f29a9)
2025-04-04 00:44:19.995045 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_3aa6ea8b-0579-4a55-b42a-d9feea6f29a9)
2025-04-04 00:44:19.997495 | orchestrator |
2025-04-04 00:44:19.997630 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-04 00:44:19.998618 | orchestrator | Friday 04 April 2025 00:44:19 +0000 (0:00:00.427) 0:00:20.794 **********
2025-04-04 00:44:20.456566 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_d959f59d-34fb-41af-b696-545de6cad1c5)
2025-04-04 00:44:20.459379 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_d959f59d-34fb-41af-b696-545de6cad1c5)
2025-04-04 00:44:20.463517 | orchestrator |
2025-04-04 00:44:20.913823 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-04 00:44:20.913931 | orchestrator | Friday 04 April 2025 00:44:20 +0000 (0:00:00.463) 0:00:21.258 **********
2025-04-04 00:44:20.913956 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_b8c398ed-0e41-4b9f-9814-6176b4164583)
2025-04-04 00:44:20.916318 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_b8c398ed-0e41-4b9f-9814-6176b4164583)
2025-04-04 00:44:20.917124 | orchestrator |
2025-04-04 00:44:20.917147 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-04 00:44:20.917168 | orchestrator | Friday 04 April 2025 00:44:20 +0000 (0:00:00.456) 0:00:21.714 **********
2025-04-04 00:44:21.380852 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_3c72412b-1c42-4489-963e-1990a8f04f17)
2025-04-04 00:44:21.381244 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_3c72412b-1c42-4489-963e-1990a8f04f17)
2025-04-04 00:44:21.382581 | orchestrator |
2025-04-04 00:44:21.383516 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-04 00:44:21.387448 | orchestrator | Friday 04 April 2025 00:44:21 +0000 (0:00:00.469) 0:00:22.184 **********
2025-04-04 00:44:22.016194 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-04-04 00:44:22.016778 | orchestrator |
2025-04-04 00:44:22.016973 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-04-04 00:44:22.017077 | orchestrator | Friday 04 April 2025 00:44:22 +0000 (0:00:00.632) 0:00:22.817 **********
2025-04-04 00:44:22.968182 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0)
2025-04-04 00:44:22.968545 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1)
2025-04-04 00:44:22.968840 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2)
2025-04-04 00:44:22.971789 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3)
2025-04-04 00:44:22.971847 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4)
2025-04-04 00:44:22.973043 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5)
2025-04-04 00:44:22.973454 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6)
2025-04-04 00:44:22.974334 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7)
2025-04-04 00:44:22.974462 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda)
2025-04-04 00:44:22.975069 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb)
2025-04-04 00:44:22.976328 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc)
2025-04-04 00:44:22.977030 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2025-04-04 00:44:22.978502 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2025-04-04 00:44:22.979396 | orchestrator | 2025-04-04 00:44:22.980169 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-04 00:44:22.981108 | orchestrator | Friday 04 April 2025 00:44:22 +0000 (0:00:00.952) 0:00:23.770 ********** 2025-04-04 00:44:23.183125 | orchestrator | skipping: [testbed-node-4] 2025-04-04 00:44:23.183749 | orchestrator | 2025-04-04 00:44:23.183861 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-04 00:44:23.183880 | orchestrator | Friday 04 April 2025 00:44:23 +0000 (0:00:00.216) 0:00:23.987 ********** 2025-04-04 00:44:23.490230 | orchestrator | skipping: [testbed-node-4] 2025-04-04 00:44:23.491626 | orchestrator | 2025-04-04 00:44:23.491644 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-04 00:44:23.494659 | orchestrator | Friday 04 April 2025 00:44:23 +0000 (0:00:00.304) 0:00:24.291 ********** 2025-04-04 00:44:23.706981 | orchestrator | skipping: [testbed-node-4] 2025-04-04 00:44:23.709338 | orchestrator | 2025-04-04 00:44:23.709360 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-04 00:44:23.709580 | orchestrator | Friday 04 April 2025 00:44:23 +0000 (0:00:00.216) 0:00:24.508 ********** 2025-04-04 00:44:23.915218 | orchestrator | skipping: [testbed-node-4] 2025-04-04 00:44:23.915296 | orchestrator | 2025-04-04 00:44:23.916892 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-04 00:44:23.917153 | orchestrator | Friday 04 April 2025 00:44:23 +0000 (0:00:00.209) 0:00:24.718 ********** 2025-04-04 00:44:24.157158 
| orchestrator | skipping: [testbed-node-4] 2025-04-04 00:44:24.160078 | orchestrator | 2025-04-04 00:44:24.160116 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-04 00:44:24.401786 | orchestrator | Friday 04 April 2025 00:44:24 +0000 (0:00:00.238) 0:00:24.956 ********** 2025-04-04 00:44:24.401846 | orchestrator | skipping: [testbed-node-4] 2025-04-04 00:44:24.401934 | orchestrator | 2025-04-04 00:44:24.401959 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-04 00:44:24.402065 | orchestrator | Friday 04 April 2025 00:44:24 +0000 (0:00:00.249) 0:00:25.205 ********** 2025-04-04 00:44:24.615246 | orchestrator | skipping: [testbed-node-4] 2025-04-04 00:44:24.618785 | orchestrator | 2025-04-04 00:44:24.835479 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-04 00:44:24.835546 | orchestrator | Friday 04 April 2025 00:44:24 +0000 (0:00:00.210) 0:00:25.416 ********** 2025-04-04 00:44:24.835572 | orchestrator | skipping: [testbed-node-4] 2025-04-04 00:44:26.086239 | orchestrator | 2025-04-04 00:44:26.086403 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-04 00:44:26.086421 | orchestrator | Friday 04 April 2025 00:44:24 +0000 (0:00:00.221) 0:00:25.638 ********** 2025-04-04 00:44:26.086469 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2025-04-04 00:44:26.086532 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2025-04-04 00:44:26.087752 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2025-04-04 00:44:26.088676 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2025-04-04 00:44:26.090210 | orchestrator | 2025-04-04 00:44:26.093184 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-04 00:44:26.093208 | orchestrator | Friday 04 April 2025 00:44:26 +0000 (0:00:01.249) 0:00:26.888 
********** 2025-04-04 00:44:26.316819 | orchestrator | skipping: [testbed-node-4] 2025-04-04 00:44:26.317658 | orchestrator | 2025-04-04 00:44:26.317684 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-04 00:44:26.317733 | orchestrator | Friday 04 April 2025 00:44:26 +0000 (0:00:00.231) 0:00:27.119 ********** 2025-04-04 00:44:26.568030 | orchestrator | skipping: [testbed-node-4] 2025-04-04 00:44:26.569132 | orchestrator | 2025-04-04 00:44:26.570115 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-04 00:44:26.570808 | orchestrator | Friday 04 April 2025 00:44:26 +0000 (0:00:00.247) 0:00:27.367 ********** 2025-04-04 00:44:26.812128 | orchestrator | skipping: [testbed-node-4] 2025-04-04 00:44:26.813049 | orchestrator | 2025-04-04 00:44:26.814529 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-04 00:44:26.815349 | orchestrator | Friday 04 April 2025 00:44:26 +0000 (0:00:00.245) 0:00:27.612 ********** 2025-04-04 00:44:27.038727 | orchestrator | skipping: [testbed-node-4] 2025-04-04 00:44:27.039258 | orchestrator | 2025-04-04 00:44:27.040449 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-04-04 00:44:27.041549 | orchestrator | Friday 04 April 2025 00:44:27 +0000 (0:00:00.229) 0:00:27.841 ********** 2025-04-04 00:44:27.273272 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None}) 2025-04-04 00:44:27.275105 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None}) 2025-04-04 00:44:27.275555 | orchestrator | 2025-04-04 00:44:27.277776 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-04-04 00:44:27.430585 | orchestrator | Friday 04 April 2025 00:44:27 +0000 (0:00:00.234) 0:00:28.076 ********** 2025-04-04 00:44:27.430616 | orchestrator | skipping: 
[testbed-node-4] 2025-04-04 00:44:27.431877 | orchestrator | 2025-04-04 00:44:27.432670 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-04-04 00:44:27.434201 | orchestrator | Friday 04 April 2025 00:44:27 +0000 (0:00:00.156) 0:00:28.232 ********** 2025-04-04 00:44:27.575550 | orchestrator | skipping: [testbed-node-4] 2025-04-04 00:44:27.576501 | orchestrator | 2025-04-04 00:44:27.577586 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-04-04 00:44:27.580690 | orchestrator | Friday 04 April 2025 00:44:27 +0000 (0:00:00.146) 0:00:28.379 ********** 2025-04-04 00:44:27.740263 | orchestrator | skipping: [testbed-node-4] 2025-04-04 00:44:27.740988 | orchestrator | 2025-04-04 00:44:27.743733 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-04-04 00:44:27.745915 | orchestrator | Friday 04 April 2025 00:44:27 +0000 (0:00:00.160) 0:00:28.539 ********** 2025-04-04 00:44:27.886507 | orchestrator | ok: [testbed-node-4] 2025-04-04 00:44:27.887041 | orchestrator | 2025-04-04 00:44:27.887777 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-04-04 00:44:27.888638 | orchestrator | Friday 04 April 2025 00:44:27 +0000 (0:00:00.150) 0:00:28.690 ********** 2025-04-04 00:44:28.088089 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '8a501af7-3b2a-532c-a25d-8e5c367a167f'}}) 2025-04-04 00:44:28.089173 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '0aaa6bd4-fef5-5601-ad5e-b9ddf526c824'}}) 2025-04-04 00:44:28.091535 | orchestrator | 2025-04-04 00:44:28.091818 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-04-04 00:44:28.093883 | orchestrator | Friday 04 April 2025 00:44:28 +0000 (0:00:00.200) 0:00:28.891 ********** 2025-04-04 00:44:28.276695 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '8a501af7-3b2a-532c-a25d-8e5c367a167f'}})  2025-04-04 00:44:28.277604 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '0aaa6bd4-fef5-5601-ad5e-b9ddf526c824'}})  2025-04-04 00:44:28.278780 | orchestrator | skipping: [testbed-node-4] 2025-04-04 00:44:28.279494 | orchestrator | 2025-04-04 00:44:28.280260 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-04-04 00:44:28.282686 | orchestrator | Friday 04 April 2025 00:44:28 +0000 (0:00:00.189) 0:00:29.080 ********** 2025-04-04 00:44:28.704271 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '8a501af7-3b2a-532c-a25d-8e5c367a167f'}})  2025-04-04 00:44:28.704692 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '0aaa6bd4-fef5-5601-ad5e-b9ddf526c824'}})  2025-04-04 00:44:28.705548 | orchestrator | skipping: [testbed-node-4] 2025-04-04 00:44:28.707018 | orchestrator | 2025-04-04 00:44:28.707560 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-04-04 00:44:28.708620 | orchestrator | Friday 04 April 2025 00:44:28 +0000 (0:00:00.425) 0:00:29.506 ********** 2025-04-04 00:44:28.883803 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '8a501af7-3b2a-532c-a25d-8e5c367a167f'}})  2025-04-04 00:44:28.884272 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '0aaa6bd4-fef5-5601-ad5e-b9ddf526c824'}})  2025-04-04 00:44:28.885290 | orchestrator | skipping: [testbed-node-4] 2025-04-04 00:44:28.885637 | orchestrator | 2025-04-04 00:44:28.886718 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-04-04 00:44:28.887499 | orchestrator | Friday 04 April 2025 00:44:28 +0000 
(0:00:00.179) 0:00:29.686 ********** 2025-04-04 00:44:29.043763 | orchestrator | ok: [testbed-node-4] 2025-04-04 00:44:29.044374 | orchestrator | 2025-04-04 00:44:29.045372 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-04-04 00:44:29.045796 | orchestrator | Friday 04 April 2025 00:44:29 +0000 (0:00:00.160) 0:00:29.846 ********** 2025-04-04 00:44:29.212071 | orchestrator | ok: [testbed-node-4] 2025-04-04 00:44:29.213100 | orchestrator | 2025-04-04 00:44:29.214469 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-04-04 00:44:29.216359 | orchestrator | Friday 04 April 2025 00:44:29 +0000 (0:00:00.168) 0:00:30.015 ********** 2025-04-04 00:44:29.360377 | orchestrator | skipping: [testbed-node-4] 2025-04-04 00:44:29.361810 | orchestrator | 2025-04-04 00:44:29.362233 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-04-04 00:44:29.364730 | orchestrator | Friday 04 April 2025 00:44:29 +0000 (0:00:00.148) 0:00:30.163 ********** 2025-04-04 00:44:29.520759 | orchestrator | skipping: [testbed-node-4] 2025-04-04 00:44:29.522100 | orchestrator | 2025-04-04 00:44:29.522128 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-04-04 00:44:29.523509 | orchestrator | Friday 04 April 2025 00:44:29 +0000 (0:00:00.159) 0:00:30.323 ********** 2025-04-04 00:44:29.666665 | orchestrator | skipping: [testbed-node-4] 2025-04-04 00:44:29.666865 | orchestrator | 2025-04-04 00:44:29.667265 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-04-04 00:44:29.668170 | orchestrator | Friday 04 April 2025 00:44:29 +0000 (0:00:00.147) 0:00:30.471 ********** 2025-04-04 00:44:29.822306 | orchestrator | ok: [testbed-node-4] => { 2025-04-04 00:44:29.823936 | orchestrator |  "ceph_osd_devices": { 2025-04-04 00:44:29.825709 | orchestrator |  "sdb": 
{ 2025-04-04 00:44:29.826750 | orchestrator |  "osd_lvm_uuid": "8a501af7-3b2a-532c-a25d-8e5c367a167f" 2025-04-04 00:44:29.827425 | orchestrator |  }, 2025-04-04 00:44:29.827967 | orchestrator |  "sdc": { 2025-04-04 00:44:29.829404 | orchestrator |  "osd_lvm_uuid": "0aaa6bd4-fef5-5601-ad5e-b9ddf526c824" 2025-04-04 00:44:29.830082 | orchestrator |  } 2025-04-04 00:44:29.830819 | orchestrator |  } 2025-04-04 00:44:29.831292 | orchestrator | } 2025-04-04 00:44:29.831980 | orchestrator | 2025-04-04 00:44:29.832677 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-04-04 00:44:29.833599 | orchestrator | Friday 04 April 2025 00:44:29 +0000 (0:00:00.153) 0:00:30.624 ********** 2025-04-04 00:44:29.969307 | orchestrator | skipping: [testbed-node-4] 2025-04-04 00:44:29.972040 | orchestrator | 2025-04-04 00:44:29.973479 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-04-04 00:44:29.974820 | orchestrator | Friday 04 April 2025 00:44:29 +0000 (0:00:00.147) 0:00:30.771 ********** 2025-04-04 00:44:30.151241 | orchestrator | skipping: [testbed-node-4] 2025-04-04 00:44:30.151581 | orchestrator | 2025-04-04 00:44:30.152734 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2025-04-04 00:44:30.153073 | orchestrator | Friday 04 April 2025 00:44:30 +0000 (0:00:00.183) 0:00:30.954 ********** 2025-04-04 00:44:30.297991 | orchestrator | skipping: [testbed-node-4] 2025-04-04 00:44:30.300107 | orchestrator | 2025-04-04 00:44:30.300998 | orchestrator | TASK [Print configuration data] ************************************************ 2025-04-04 00:44:30.301830 | orchestrator | Friday 04 April 2025 00:44:30 +0000 (0:00:00.146) 0:00:31.101 ********** 2025-04-04 00:44:30.844621 | orchestrator | changed: [testbed-node-4] => { 2025-04-04 00:44:30.846428 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-04-04 00:44:30.847537 | orchestrator 
|  "ceph_osd_devices": { 2025-04-04 00:44:30.847883 | orchestrator |  "sdb": { 2025-04-04 00:44:30.849744 | orchestrator |  "osd_lvm_uuid": "8a501af7-3b2a-532c-a25d-8e5c367a167f" 2025-04-04 00:44:30.850373 | orchestrator |  }, 2025-04-04 00:44:30.850894 | orchestrator |  "sdc": { 2025-04-04 00:44:30.852184 | orchestrator |  "osd_lvm_uuid": "0aaa6bd4-fef5-5601-ad5e-b9ddf526c824" 2025-04-04 00:44:30.852645 | orchestrator |  } 2025-04-04 00:44:30.853190 | orchestrator |  }, 2025-04-04 00:44:30.854045 | orchestrator |  "lvm_volumes": [ 2025-04-04 00:44:30.854216 | orchestrator |  { 2025-04-04 00:44:30.854780 | orchestrator |  "data": "osd-block-8a501af7-3b2a-532c-a25d-8e5c367a167f", 2025-04-04 00:44:30.855162 | orchestrator |  "data_vg": "ceph-8a501af7-3b2a-532c-a25d-8e5c367a167f" 2025-04-04 00:44:30.855689 | orchestrator |  }, 2025-04-04 00:44:30.856075 | orchestrator |  { 2025-04-04 00:44:30.857019 | orchestrator |  "data": "osd-block-0aaa6bd4-fef5-5601-ad5e-b9ddf526c824", 2025-04-04 00:44:30.857490 | orchestrator |  "data_vg": "ceph-0aaa6bd4-fef5-5601-ad5e-b9ddf526c824" 2025-04-04 00:44:30.857510 | orchestrator |  } 2025-04-04 00:44:30.857844 | orchestrator |  ] 2025-04-04 00:44:30.858157 | orchestrator |  } 2025-04-04 00:44:30.858820 | orchestrator | } 2025-04-04 00:44:30.859142 | orchestrator | 2025-04-04 00:44:30.859492 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-04-04 00:44:30.859789 | orchestrator | Friday 04 April 2025 00:44:30 +0000 (0:00:00.544) 0:00:31.646 ********** 2025-04-04 00:44:32.360001 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-04-04 00:44:32.363145 | orchestrator | 2025-04-04 00:44:32.364604 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-04-04 00:44:32.364644 | orchestrator | 2025-04-04 00:44:32.365258 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 
2025-04-04 00:44:32.366552 | orchestrator | Friday 04 April 2025 00:44:32 +0000 (0:00:01.515) 0:00:33.161 ********** 2025-04-04 00:44:32.623524 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-04-04 00:44:32.623710 | orchestrator | 2025-04-04 00:44:32.624717 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-04-04 00:44:32.628612 | orchestrator | Friday 04 April 2025 00:44:32 +0000 (0:00:00.265) 0:00:33.426 ********** 2025-04-04 00:44:33.327475 | orchestrator | ok: [testbed-node-5] 2025-04-04 00:44:33.328155 | orchestrator | 2025-04-04 00:44:33.329058 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-04 00:44:33.330186 | orchestrator | Friday 04 April 2025 00:44:33 +0000 (0:00:00.702) 0:00:34.129 ********** 2025-04-04 00:44:33.767385 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2025-04-04 00:44:33.770550 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2025-04-04 00:44:33.772818 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2025-04-04 00:44:33.772852 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2025-04-04 00:44:33.773086 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2025-04-04 00:44:33.773994 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2025-04-04 00:44:33.774649 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2025-04-04 00:44:33.775371 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2025-04-04 00:44:33.775743 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2025-04-04 
00:44:33.775973 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2025-04-04 00:44:33.776625 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2025-04-04 00:44:33.777416 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2025-04-04 00:44:33.777903 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2025-04-04 00:44:33.778821 | orchestrator | 2025-04-04 00:44:33.779462 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-04 00:44:33.779832 | orchestrator | Friday 04 April 2025 00:44:33 +0000 (0:00:00.439) 0:00:34.568 ********** 2025-04-04 00:44:33.991776 | orchestrator | skipping: [testbed-node-5] 2025-04-04 00:44:33.992579 | orchestrator | 2025-04-04 00:44:33.993216 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-04 00:44:33.994315 | orchestrator | Friday 04 April 2025 00:44:33 +0000 (0:00:00.226) 0:00:34.794 ********** 2025-04-04 00:44:34.226162 | orchestrator | skipping: [testbed-node-5] 2025-04-04 00:44:34.226292 | orchestrator | 2025-04-04 00:44:34.226603 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-04 00:44:34.227063 | orchestrator | Friday 04 April 2025 00:44:34 +0000 (0:00:00.234) 0:00:35.029 ********** 2025-04-04 00:44:34.461087 | orchestrator | skipping: [testbed-node-5] 2025-04-04 00:44:34.461528 | orchestrator | 2025-04-04 00:44:34.461567 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-04 00:44:34.461850 | orchestrator | Friday 04 April 2025 00:44:34 +0000 (0:00:00.234) 0:00:35.264 ********** 2025-04-04 00:44:34.689173 | orchestrator | skipping: [testbed-node-5] 2025-04-04 00:44:34.689968 | orchestrator | 2025-04-04 00:44:34.690813 | orchestrator 
| TASK [Add known links to the list of available block devices] ****************** 2025-04-04 00:44:34.691634 | orchestrator | Friday 04 April 2025 00:44:34 +0000 (0:00:00.227) 0:00:35.492 ********** 2025-04-04 00:44:34.909158 | orchestrator | skipping: [testbed-node-5] 2025-04-04 00:44:34.910109 | orchestrator | 2025-04-04 00:44:34.912658 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-04 00:44:35.118234 | orchestrator | Friday 04 April 2025 00:44:34 +0000 (0:00:00.218) 0:00:35.710 ********** 2025-04-04 00:44:35.118363 | orchestrator | skipping: [testbed-node-5] 2025-04-04 00:44:35.119131 | orchestrator | 2025-04-04 00:44:35.120168 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-04 00:44:35.121215 | orchestrator | Friday 04 April 2025 00:44:35 +0000 (0:00:00.210) 0:00:35.921 ********** 2025-04-04 00:44:35.362289 | orchestrator | skipping: [testbed-node-5] 2025-04-04 00:44:35.362800 | orchestrator | 2025-04-04 00:44:35.363508 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-04 00:44:35.364652 | orchestrator | Friday 04 April 2025 00:44:35 +0000 (0:00:00.243) 0:00:36.165 ********** 2025-04-04 00:44:35.568168 | orchestrator | skipping: [testbed-node-5] 2025-04-04 00:44:35.568598 | orchestrator | 2025-04-04 00:44:35.569612 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-04 00:44:35.570774 | orchestrator | Friday 04 April 2025 00:44:35 +0000 (0:00:00.206) 0:00:36.371 ********** 2025-04-04 00:44:36.534301 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_cf831ac4-ec72-4e5f-9ce6-19359424a886) 2025-04-04 00:44:36.535735 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_cf831ac4-ec72-4e5f-9ce6-19359424a886) 2025-04-04 00:44:36.536930 | orchestrator | 2025-04-04 00:44:36.537571 | orchestrator | TASK [Add 
known links to the list of available block devices] ****************** 2025-04-04 00:44:36.539018 | orchestrator | Friday 04 April 2025 00:44:36 +0000 (0:00:00.964) 0:00:37.335 ********** 2025-04-04 00:44:37.010074 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_40370202-6a2d-4119-85f5-057a26d35c03) 2025-04-04 00:44:37.010669 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_40370202-6a2d-4119-85f5-057a26d35c03) 2025-04-04 00:44:37.010703 | orchestrator | 2025-04-04 00:44:37.010729 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-04 00:44:37.011898 | orchestrator | Friday 04 April 2025 00:44:37 +0000 (0:00:00.472) 0:00:37.808 ********** 2025-04-04 00:44:37.465345 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_9bee64ad-b5e3-4230-9d8c-a8a301110b73) 2025-04-04 00:44:37.465598 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_9bee64ad-b5e3-4230-9d8c-a8a301110b73) 2025-04-04 00:44:37.466286 | orchestrator | 2025-04-04 00:44:37.920468 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-04 00:44:37.920583 | orchestrator | Friday 04 April 2025 00:44:37 +0000 (0:00:00.460) 0:00:38.269 ********** 2025-04-04 00:44:37.920615 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_07b46a09-8c2f-4f65-be68-ae4e772e446d) 2025-04-04 00:44:37.922570 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_07b46a09-8c2f-4f65-be68-ae4e772e446d) 2025-04-04 00:44:37.923851 | orchestrator | 2025-04-04 00:44:37.925260 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-04 00:44:37.926539 | orchestrator | Friday 04 April 2025 00:44:37 +0000 (0:00:00.452) 0:00:38.721 ********** 2025-04-04 00:44:38.275897 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-04-04 00:44:38.276920 | 
orchestrator | 2025-04-04 00:44:38.278605 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-04 00:44:38.280917 | orchestrator | Friday 04 April 2025 00:44:38 +0000 (0:00:00.357) 0:00:39.079 ********** 2025-04-04 00:44:38.772185 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2025-04-04 00:44:38.775801 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2025-04-04 00:44:38.779776 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2025-04-04 00:44:38.779829 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2025-04-04 00:44:38.781024 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2025-04-04 00:44:38.781983 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2025-04-04 00:44:38.782961 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2025-04-04 00:44:38.784191 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2025-04-04 00:44:38.785722 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2025-04-04 00:44:38.786052 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2025-04-04 00:44:38.787054 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2025-04-04 00:44:38.787767 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2025-04-04 00:44:38.788994 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2025-04-04 00:44:38.789429 | orchestrator | 
2025-04-04 00:44:38.790347 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-04 00:44:38.791051 | orchestrator | Friday 04 April 2025 00:44:38 +0000 (0:00:00.495) 0:00:39.574 ********** 2025-04-04 00:44:39.017039 | orchestrator | skipping: [testbed-node-5] 2025-04-04 00:44:39.021383 | orchestrator | 2025-04-04 00:44:39.235093 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-04 00:44:39.235141 | orchestrator | Friday 04 April 2025 00:44:39 +0000 (0:00:00.244) 0:00:39.819 ********** 2025-04-04 00:44:39.235164 | orchestrator | skipping: [testbed-node-5] 2025-04-04 00:44:39.236872 | orchestrator | 2025-04-04 00:44:39.238259 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-04 00:44:39.239607 | orchestrator | Friday 04 April 2025 00:44:39 +0000 (0:00:00.219) 0:00:40.039 ********** 2025-04-04 00:44:39.482741 | orchestrator | skipping: [testbed-node-5] 2025-04-04 00:44:39.483716 | orchestrator | 2025-04-04 00:44:39.487328 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-04 00:44:39.488168 | orchestrator | Friday 04 April 2025 00:44:39 +0000 (0:00:00.245) 0:00:40.284 ********** 2025-04-04 00:44:40.202286 | orchestrator | skipping: [testbed-node-5] 2025-04-04 00:44:40.202613 | orchestrator | 2025-04-04 00:44:40.203010 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-04 00:44:40.203552 | orchestrator | Friday 04 April 2025 00:44:40 +0000 (0:00:00.714) 0:00:40.999 ********** 2025-04-04 00:44:40.429640 | orchestrator | skipping: [testbed-node-5] 2025-04-04 00:44:40.429918 | orchestrator | 2025-04-04 00:44:40.430378 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-04 00:44:40.431316 | orchestrator | Friday 04 April 2025 00:44:40 +0000 
(0:00:00.232) 0:00:41.232 ********** 2025-04-04 00:44:40.664480 | orchestrator | skipping: [testbed-node-5] 2025-04-04 00:44:40.665389 | orchestrator | 2025-04-04 00:44:40.665429 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-04 00:44:40.666273 | orchestrator | Friday 04 April 2025 00:44:40 +0000 (0:00:00.235) 0:00:41.467 ********** 2025-04-04 00:44:40.898974 | orchestrator | skipping: [testbed-node-5] 2025-04-04 00:44:40.899948 | orchestrator | 2025-04-04 00:44:40.899992 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-04 00:44:40.901562 | orchestrator | Friday 04 April 2025 00:44:40 +0000 (0:00:00.234) 0:00:41.702 ********** 2025-04-04 00:44:41.142846 | orchestrator | skipping: [testbed-node-5] 2025-04-04 00:44:41.143946 | orchestrator | 2025-04-04 00:44:41.145462 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-04 00:44:41.146009 | orchestrator | Friday 04 April 2025 00:44:41 +0000 (0:00:00.243) 0:00:41.946 ********** 2025-04-04 00:44:41.880564 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2025-04-04 00:44:41.881822 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2025-04-04 00:44:41.883308 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2025-04-04 00:44:41.884720 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2025-04-04 00:44:41.885620 | orchestrator | 2025-04-04 00:44:41.887133 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-04 00:44:41.888020 | orchestrator | Friday 04 April 2025 00:44:41 +0000 (0:00:00.737) 0:00:42.684 ********** 2025-04-04 00:44:42.094378 | orchestrator | skipping: [testbed-node-5] 2025-04-04 00:44:42.094610 | orchestrator | 2025-04-04 00:44:42.095453 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-04 00:44:42.096028 | orchestrator 
| Friday 04 April 2025 00:44:42 +0000 (0:00:00.212) 0:00:42.896 ********** 2025-04-04 00:44:42.344152 | orchestrator | skipping: [testbed-node-5] 2025-04-04 00:44:42.344744 | orchestrator | 2025-04-04 00:44:42.345324 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-04 00:44:42.345718 | orchestrator | Friday 04 April 2025 00:44:42 +0000 (0:00:00.249) 0:00:43.145 ********** 2025-04-04 00:44:42.556378 | orchestrator | skipping: [testbed-node-5] 2025-04-04 00:44:42.556506 | orchestrator | 2025-04-04 00:44:42.557327 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-04 00:44:42.557848 | orchestrator | Friday 04 April 2025 00:44:42 +0000 (0:00:00.214) 0:00:43.360 ********** 2025-04-04 00:44:42.797520 | orchestrator | skipping: [testbed-node-5] 2025-04-04 00:44:42.798498 | orchestrator | 2025-04-04 00:44:42.800789 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-04-04 00:44:42.801577 | orchestrator | Friday 04 April 2025 00:44:42 +0000 (0:00:00.240) 0:00:43.600 ********** 2025-04-04 00:44:43.243394 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None}) 2025-04-04 00:44:43.243765 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None}) 2025-04-04 00:44:43.245313 | orchestrator | 2025-04-04 00:44:43.246192 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-04-04 00:44:43.248284 | orchestrator | Friday 04 April 2025 00:44:43 +0000 (0:00:00.445) 0:00:44.045 ********** 2025-04-04 00:44:43.398248 | orchestrator | skipping: [testbed-node-5] 2025-04-04 00:44:43.398653 | orchestrator | 2025-04-04 00:44:43.398981 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-04-04 00:44:43.399899 | orchestrator | Friday 04 April 2025 00:44:43 +0000 (0:00:00.156) 0:00:44.202 ********** 
2025-04-04 00:44:43.549474 | orchestrator | skipping: [testbed-node-5] 2025-04-04 00:44:43.549923 | orchestrator | 2025-04-04 00:44:43.549988 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-04-04 00:44:43.550089 | orchestrator | Friday 04 April 2025 00:44:43 +0000 (0:00:00.150) 0:00:44.352 ********** 2025-04-04 00:44:43.714226 | orchestrator | skipping: [testbed-node-5] 2025-04-04 00:44:43.714759 | orchestrator | 2025-04-04 00:44:43.715870 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-04-04 00:44:43.866375 | orchestrator | Friday 04 April 2025 00:44:43 +0000 (0:00:00.165) 0:00:44.518 ********** 2025-04-04 00:44:43.866475 | orchestrator | ok: [testbed-node-5] 2025-04-04 00:44:43.867670 | orchestrator | 2025-04-04 00:44:43.867702 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-04-04 00:44:43.869068 | orchestrator | Friday 04 April 2025 00:44:43 +0000 (0:00:00.147) 0:00:44.666 ********** 2025-04-04 00:44:44.053511 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'c80404de-2a7c-53fa-825b-8df99123a17e'}}) 2025-04-04 00:44:44.056075 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '722fade9-82b0-5f70-b367-45676e1969e2'}}) 2025-04-04 00:44:44.056424 | orchestrator | 2025-04-04 00:44:44.058372 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-04-04 00:44:44.059846 | orchestrator | Friday 04 April 2025 00:44:44 +0000 (0:00:00.187) 0:00:44.854 ********** 2025-04-04 00:44:44.243911 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'c80404de-2a7c-53fa-825b-8df99123a17e'}})  2025-04-04 00:44:44.244989 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '722fade9-82b0-5f70-b367-45676e1969e2'}})  
2025-04-04 00:44:44.245830 | orchestrator | skipping: [testbed-node-5] 2025-04-04 00:44:44.246495 | orchestrator | 2025-04-04 00:44:44.247101 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-04-04 00:44:44.247667 | orchestrator | Friday 04 April 2025 00:44:44 +0000 (0:00:00.193) 0:00:45.047 ********** 2025-04-04 00:44:44.428947 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'c80404de-2a7c-53fa-825b-8df99123a17e'}})  2025-04-04 00:44:44.429998 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '722fade9-82b0-5f70-b367-45676e1969e2'}})  2025-04-04 00:44:44.430883 | orchestrator | skipping: [testbed-node-5] 2025-04-04 00:44:44.431596 | orchestrator | 2025-04-04 00:44:44.432193 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-04-04 00:44:44.433026 | orchestrator | Friday 04 April 2025 00:44:44 +0000 (0:00:00.184) 0:00:45.232 ********** 2025-04-04 00:44:44.602942 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'c80404de-2a7c-53fa-825b-8df99123a17e'}})  2025-04-04 00:44:44.603094 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '722fade9-82b0-5f70-b367-45676e1969e2'}})  2025-04-04 00:44:44.603642 | orchestrator | skipping: [testbed-node-5] 2025-04-04 00:44:44.603673 | orchestrator | 2025-04-04 00:44:44.605686 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-04-04 00:44:44.605779 | orchestrator | Friday 04 April 2025 00:44:44 +0000 (0:00:00.171) 0:00:45.403 ********** 2025-04-04 00:44:44.782305 | orchestrator | ok: [testbed-node-5] 2025-04-04 00:44:44.783139 | orchestrator | 2025-04-04 00:44:44.783560 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-04-04 00:44:44.784022 | 
orchestrator | Friday 04 April 2025 00:44:44 +0000 (0:00:00.182) 0:00:45.585 ********** 2025-04-04 00:44:44.953480 | orchestrator | ok: [testbed-node-5] 2025-04-04 00:44:44.954091 | orchestrator | 2025-04-04 00:44:44.955521 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-04-04 00:44:44.956274 | orchestrator | Friday 04 April 2025 00:44:44 +0000 (0:00:00.169) 0:00:45.755 ********** 2025-04-04 00:44:45.136318 | orchestrator | skipping: [testbed-node-5] 2025-04-04 00:44:45.137172 | orchestrator | 2025-04-04 00:44:45.138090 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-04-04 00:44:45.139188 | orchestrator | Friday 04 April 2025 00:44:45 +0000 (0:00:00.181) 0:00:45.936 ********** 2025-04-04 00:44:45.533496 | orchestrator | skipping: [testbed-node-5] 2025-04-04 00:44:45.533677 | orchestrator | 2025-04-04 00:44:45.534561 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-04-04 00:44:45.535869 | orchestrator | Friday 04 April 2025 00:44:45 +0000 (0:00:00.400) 0:00:46.337 ********** 2025-04-04 00:44:45.692831 | orchestrator | skipping: [testbed-node-5] 2025-04-04 00:44:45.693216 | orchestrator | 2025-04-04 00:44:45.694769 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-04-04 00:44:45.697842 | orchestrator | Friday 04 April 2025 00:44:45 +0000 (0:00:00.158) 0:00:46.495 ********** 2025-04-04 00:44:45.844954 | orchestrator | ok: [testbed-node-5] => { 2025-04-04 00:44:45.846517 | orchestrator |  "ceph_osd_devices": { 2025-04-04 00:44:45.847290 | orchestrator |  "sdb": { 2025-04-04 00:44:45.848843 | orchestrator |  "osd_lvm_uuid": "c80404de-2a7c-53fa-825b-8df99123a17e" 2025-04-04 00:44:45.849970 | orchestrator |  }, 2025-04-04 00:44:45.850085 | orchestrator |  "sdc": { 2025-04-04 00:44:45.851160 | orchestrator |  "osd_lvm_uuid": 
"722fade9-82b0-5f70-b367-45676e1969e2" 2025-04-04 00:44:45.852359 | orchestrator |  } 2025-04-04 00:44:45.852461 | orchestrator |  } 2025-04-04 00:44:45.853362 | orchestrator | } 2025-04-04 00:44:45.853561 | orchestrator | 2025-04-04 00:44:45.854218 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-04-04 00:44:45.854939 | orchestrator | Friday 04 April 2025 00:44:45 +0000 (0:00:00.152) 0:00:46.648 ********** 2025-04-04 00:44:46.006334 | orchestrator | skipping: [testbed-node-5] 2025-04-04 00:44:46.006532 | orchestrator | 2025-04-04 00:44:46.007466 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-04-04 00:44:46.009926 | orchestrator | Friday 04 April 2025 00:44:46 +0000 (0:00:00.161) 0:00:46.809 ********** 2025-04-04 00:44:46.164380 | orchestrator | skipping: [testbed-node-5] 2025-04-04 00:44:46.165638 | orchestrator | 2025-04-04 00:44:46.166530 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2025-04-04 00:44:46.167630 | orchestrator | Friday 04 April 2025 00:44:46 +0000 (0:00:00.158) 0:00:46.967 ********** 2025-04-04 00:44:46.349104 | orchestrator | skipping: [testbed-node-5] 2025-04-04 00:44:46.349408 | orchestrator | 2025-04-04 00:44:46.350691 | orchestrator | TASK [Print configuration data] ************************************************ 2025-04-04 00:44:46.352076 | orchestrator | Friday 04 April 2025 00:44:46 +0000 (0:00:00.184) 0:00:47.152 ********** 2025-04-04 00:44:46.681388 | orchestrator | changed: [testbed-node-5] => { 2025-04-04 00:44:46.681562 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-04-04 00:44:46.682685 | orchestrator |  "ceph_osd_devices": { 2025-04-04 00:44:46.683608 | orchestrator |  "sdb": { 2025-04-04 00:44:46.685105 | orchestrator |  "osd_lvm_uuid": "c80404de-2a7c-53fa-825b-8df99123a17e" 2025-04-04 00:44:46.685572 | orchestrator |  }, 2025-04-04 00:44:46.686996 | 
orchestrator |  "sdc": { 2025-04-04 00:44:46.687630 | orchestrator |  "osd_lvm_uuid": "722fade9-82b0-5f70-b367-45676e1969e2" 2025-04-04 00:44:46.688656 | orchestrator |  } 2025-04-04 00:44:46.689283 | orchestrator |  }, 2025-04-04 00:44:46.689978 | orchestrator |  "lvm_volumes": [ 2025-04-04 00:44:46.690346 | orchestrator |  { 2025-04-04 00:44:46.691349 | orchestrator |  "data": "osd-block-c80404de-2a7c-53fa-825b-8df99123a17e", 2025-04-04 00:44:46.692750 | orchestrator |  "data_vg": "ceph-c80404de-2a7c-53fa-825b-8df99123a17e" 2025-04-04 00:44:46.693093 | orchestrator |  }, 2025-04-04 00:44:46.693231 | orchestrator |  { 2025-04-04 00:44:46.693723 | orchestrator |  "data": "osd-block-722fade9-82b0-5f70-b367-45676e1969e2", 2025-04-04 00:44:46.694186 | orchestrator |  "data_vg": "ceph-722fade9-82b0-5f70-b367-45676e1969e2" 2025-04-04 00:44:46.695257 | orchestrator |  } 2025-04-04 00:44:46.695340 | orchestrator |  ] 2025-04-04 00:44:46.695970 | orchestrator |  } 2025-04-04 00:44:46.696378 | orchestrator | } 2025-04-04 00:44:46.697086 | orchestrator | 2025-04-04 00:44:46.697350 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-04-04 00:44:46.698563 | orchestrator | Friday 04 April 2025 00:44:46 +0000 (0:00:00.332) 0:00:47.484 ********** 2025-04-04 00:44:48.507925 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-04-04 00:44:48.508619 | orchestrator | 2025-04-04 00:44:48.514521 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-04 00:44:48.514572 | orchestrator | 2025-04-04 00:44:48 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-04-04 00:44:48.515582 | orchestrator | 2025-04-04 00:44:48 | INFO  | Please wait and do not abort execution. 
2025-04-04 00:44:48.515615 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-04-04 00:44:48.518503 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-04-04 00:44:48.521205 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-04-04 00:44:48.521717 | orchestrator | 2025-04-04 00:44:48.521749 | orchestrator | 2025-04-04 00:44:48.522969 | orchestrator | 2025-04-04 00:44:48.523249 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-04 00:44:48.524085 | orchestrator | Friday 04 April 2025 00:44:48 +0000 (0:00:01.824) 0:00:49.308 ********** 2025-04-04 00:44:48.524815 | orchestrator | =============================================================================== 2025-04-04 00:44:48.525710 | orchestrator | Write configuration file ------------------------------------------------ 5.80s 2025-04-04 00:44:48.526393 | orchestrator | Add known partitions to the list of available block devices ------------- 1.94s 2025-04-04 00:44:48.527071 | orchestrator | Add known links to the list of available block devices ------------------ 1.50s 2025-04-04 00:44:48.528230 | orchestrator | Add known partitions to the list of available block devices ------------- 1.25s 2025-04-04 00:44:48.528600 | orchestrator | Get initial list of available block devices ----------------------------- 1.19s 2025-04-04 00:44:48.529032 | orchestrator | Print configuration data ------------------------------------------------ 1.18s 2025-04-04 00:44:48.529636 | orchestrator | Add known links to the list of available block devices ------------------ 0.98s 2025-04-04 00:44:48.530264 | orchestrator | Add known links to the list of available block devices ------------------ 0.96s 2025-04-04 00:44:48.530801 | orchestrator | Get extra vars for Ceph configuration 
----------------------------------- 0.89s 2025-04-04 00:44:48.531063 | orchestrator | Set UUIDs for OSD VGs/LVs ----------------------------------------------- 0.89s 2025-04-04 00:44:48.531431 | orchestrator | Generate lvm_volumes structure (block + wal) ---------------------------- 0.79s 2025-04-04 00:44:48.532076 | orchestrator | Add known partitions to the list of available block devices ------------- 0.78s 2025-04-04 00:44:48.532610 | orchestrator | Set WAL devices config data --------------------------------------------- 0.74s 2025-04-04 00:44:48.533311 | orchestrator | Add known partitions to the list of available block devices ------------- 0.74s 2025-04-04 00:44:48.533400 | orchestrator | Add known partitions to the list of available block devices ------------- 0.71s 2025-04-04 00:44:48.534114 | orchestrator | Print ceph_osd_devices -------------------------------------------------- 0.69s 2025-04-04 00:44:48.534354 | orchestrator | Generate DB VG names ---------------------------------------------------- 0.68s 2025-04-04 00:44:48.534707 | orchestrator | Add known links to the list of available block devices ------------------ 0.68s 2025-04-04 00:44:48.535043 | orchestrator | Add known partitions to the list of available block devices ------------- 0.65s 2025-04-04 00:44:48.535827 | orchestrator | Add known links to the list of available block devices ------------------ 0.64s 2025-04-04 00:45:01.537666 | orchestrator | 2025-04-04 00:45:01 | INFO  | Task 33e9a38f-354e-4ac6-b847-534eaded50ab is running in background. Output coming soon. 2025-04-04 01:45:04.384930 | orchestrator | 2025-04-04 01:45:04 | INFO  | Task fb23ea8f-23a8-4a3b-b74e-77ce2d2fd3a4 (ceph-create-lvm-devices) was prepared for execution. 2025-04-04 01:45:07.851994 | orchestrator | 2025-04-04 01:45:04 | INFO  | It takes a moment until task fb23ea8f-23a8-4a3b-b74e-77ce2d2fd3a4 (ceph-create-lvm-devices) has been started and output is visible here. 
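The recap above closes the ceph-configure-lvm play, whose key output is the mapping from `ceph_osd_devices` entries to the `lvm_volumes` list printed under "Print configuration data". A minimal sketch of that transformation, mirroring the "Generate lvm_volumes structure (block only)" task (the real logic lives in the OSISM Ansible playbooks as Jinja templating; the function here is illustrative, not the playbook's own code):

```python
def build_lvm_volumes(ceph_osd_devices):
    """Block-only case: each OSD device contributes one entry whose
    LV name (data) and VG name (data_vg) embed its osd_lvm_uuid."""
    return [
        {
            "data": f"osd-block-{cfg['osd_lvm_uuid']}",
            "data_vg": f"ceph-{cfg['osd_lvm_uuid']}",
        }
        for cfg in ceph_osd_devices.values()
    ]

# The values below are taken from the log output for testbed-node-5.
devices = {
    "sdb": {"osd_lvm_uuid": "c80404de-2a7c-53fa-825b-8df99123a17e"},
    "sdc": {"osd_lvm_uuid": "722fade9-82b0-5f70-b367-45676e1969e2"},
}
print(build_lvm_volumes(devices))
```

Running this reproduces the `lvm_volumes` list shown in the "Print configuration data" task above; the block + db and block + wal variants (skipped in this run) would add `db_vg`/`wal_vg` keys per entry.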
2025-04-04 01:45:07.852161 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12 2025-04-04 01:45:08.432185 | orchestrator | 2025-04-04 01:45:08.432918 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-04-04 01:45:08.433944 | orchestrator | 2025-04-04 01:45:08.435098 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-04-04 01:45:08.437852 | orchestrator | Friday 04 April 2025 01:45:08 +0000 (0:00:00.508) 0:00:00.508 ********** 2025-04-04 01:45:08.698227 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-04-04 01:45:08.699438 | orchestrator | 2025-04-04 01:45:08.700451 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-04-04 01:45:08.701661 | orchestrator | Friday 04 April 2025 01:45:08 +0000 (0:00:00.266) 0:00:00.774 ********** 2025-04-04 01:45:08.952780 | orchestrator | ok: [testbed-node-3] 2025-04-04 01:45:08.954198 | orchestrator | 2025-04-04 01:45:08.955424 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-04 01:45:08.956269 | orchestrator | Friday 04 April 2025 01:45:08 +0000 (0:00:00.252) 0:00:01.027 ********** 2025-04-04 01:45:09.834996 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2025-04-04 01:45:09.836144 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2025-04-04 01:45:09.836181 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2025-04-04 01:45:09.838656 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2025-04-04 01:45:09.839240 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2025-04-04 01:45:09.839272 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2025-04-04 01:45:09.839924 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2025-04-04 01:45:09.840485 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2025-04-04 01:45:09.841172 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2025-04-04 01:45:09.841730 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2025-04-04 01:45:09.842658 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2025-04-04 01:45:09.842929 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2025-04-04 01:45:09.843716 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2025-04-04 01:45:09.844203 | orchestrator | 2025-04-04 01:45:09.844252 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-04 01:45:09.844568 | orchestrator | Friday 04 April 2025 01:45:09 +0000 (0:00:00.883) 0:00:01.911 ********** 2025-04-04 01:45:10.072181 | orchestrator | skipping: [testbed-node-3] 2025-04-04 01:45:10.072857 | orchestrator | 2025-04-04 01:45:10.073994 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-04 01:45:10.075325 | orchestrator | Friday 04 April 2025 01:45:10 +0000 (0:00:00.235) 0:00:02.147 ********** 2025-04-04 01:45:10.331372 | orchestrator | skipping: [testbed-node-3] 2025-04-04 01:45:10.331840 | orchestrator | 2025-04-04 01:45:10.332791 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-04 01:45:10.333270 | orchestrator | Friday 04 April 2025 01:45:10 +0000 (0:00:00.261) 0:00:02.408 ********** 2025-04-04 01:45:10.577052 | orchestrator | skipping: 
[testbed-node-3] 2025-04-04 01:45:10.578542 | orchestrator | 2025-04-04 01:45:10.578578 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-04 01:45:10.579366 | orchestrator | Friday 04 April 2025 01:45:10 +0000 (0:00:00.242) 0:00:02.650 ********** 2025-04-04 01:45:10.787099 | orchestrator | skipping: [testbed-node-3] 2025-04-04 01:45:10.787488 | orchestrator | 2025-04-04 01:45:10.788434 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-04 01:45:10.789173 | orchestrator | Friday 04 April 2025 01:45:10 +0000 (0:00:00.211) 0:00:02.861 ********** 2025-04-04 01:45:11.031605 | orchestrator | skipping: [testbed-node-3] 2025-04-04 01:45:11.033699 | orchestrator | 2025-04-04 01:45:11.034536 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-04 01:45:11.034653 | orchestrator | Friday 04 April 2025 01:45:11 +0000 (0:00:00.245) 0:00:03.107 ********** 2025-04-04 01:45:11.265320 | orchestrator | skipping: [testbed-node-3] 2025-04-04 01:45:11.265560 | orchestrator | 2025-04-04 01:45:11.267232 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-04 01:45:11.268168 | orchestrator | Friday 04 April 2025 01:45:11 +0000 (0:00:00.233) 0:00:03.340 ********** 2025-04-04 01:45:11.501269 | orchestrator | skipping: [testbed-node-3] 2025-04-04 01:45:11.501500 | orchestrator | 2025-04-04 01:45:11.502101 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-04 01:45:11.502172 | orchestrator | Friday 04 April 2025 01:45:11 +0000 (0:00:00.236) 0:00:03.577 ********** 2025-04-04 01:45:11.719529 | orchestrator | skipping: [testbed-node-3] 2025-04-04 01:45:11.720092 | orchestrator | 2025-04-04 01:45:11.720953 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-04 01:45:11.721593 | 
orchestrator | Friday 04 April 2025 01:45:11 +0000 (0:00:00.219) 0:00:03.796 ********** 2025-04-04 01:45:12.473140 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_2d6bb235-b4a5-4107-827e-9430e4ec8db1) 2025-04-04 01:45:12.473430 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_2d6bb235-b4a5-4107-827e-9430e4ec8db1) 2025-04-04 01:45:12.473503 | orchestrator | 2025-04-04 01:45:12.473975 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-04 01:45:12.474520 | orchestrator | Friday 04 April 2025 01:45:12 +0000 (0:00:00.752) 0:00:04.548 ********** 2025-04-04 01:45:13.237695 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_d452a44b-aae6-4065-a78c-9a36ae27c0a3) 2025-04-04 01:45:13.238058 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_d452a44b-aae6-4065-a78c-9a36ae27c0a3) 2025-04-04 01:45:13.239656 | orchestrator | 2025-04-04 01:45:13.240747 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-04 01:45:13.241461 | orchestrator | Friday 04 April 2025 01:45:13 +0000 (0:00:00.765) 0:00:05.314 ********** 2025-04-04 01:45:13.772755 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_7e1bb25e-090b-4c97-add7-925cccadf2fe) 2025-04-04 01:45:13.773019 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_7e1bb25e-090b-4c97-add7-925cccadf2fe) 2025-04-04 01:45:13.777938 | orchestrator | 2025-04-04 01:45:14.302673 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-04 01:45:14.302784 | orchestrator | Friday 04 April 2025 01:45:13 +0000 (0:00:00.535) 0:00:05.849 ********** 2025-04-04 01:45:14.302814 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_5f8b0337-6fe2-4311-94ea-5b7abe02e48e) 2025-04-04 01:45:14.303286 | orchestrator | ok: [testbed-node-3] => 
(item=scsi-SQEMU_QEMU_HARDDISK_5f8b0337-6fe2-4311-94ea-5b7abe02e48e) 2025-04-04 01:45:14.305534 | orchestrator | 2025-04-04 01:45:14.305856 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-04 01:45:14.306776 | orchestrator | Friday 04 April 2025 01:45:14 +0000 (0:00:00.528) 0:00:06.377 ********** 2025-04-04 01:45:14.774461 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-04-04 01:45:14.774600 | orchestrator | 2025-04-04 01:45:14.774851 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-04 01:45:14.775737 | orchestrator | Friday 04 April 2025 01:45:14 +0000 (0:00:00.473) 0:00:06.851 ********** 2025-04-04 01:45:15.341286 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2025-04-04 01:45:15.342476 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2025-04-04 01:45:15.342517 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2025-04-04 01:45:15.342777 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2025-04-04 01:45:15.342808 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2025-04-04 01:45:15.343406 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2025-04-04 01:45:15.346516 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2025-04-04 01:45:15.563821 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2025-04-04 01:45:15.563951 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2025-04-04 01:45:15.563977 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2025-04-04 01:45:15.564002 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2025-04-04 01:45:15.564026 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2025-04-04 01:45:15.564051 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2025-04-04 01:45:15.564075 | orchestrator | 2025-04-04 01:45:15.564100 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-04 01:45:15.564125 | orchestrator | Friday 04 April 2025 01:45:15 +0000 (0:00:00.565) 0:00:07.417 ********** 2025-04-04 01:45:15.564203 | orchestrator | skipping: [testbed-node-3] 2025-04-04 01:45:15.564619 | orchestrator | 2025-04-04 01:45:15.564668 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-04 01:45:15.565923 | orchestrator | Friday 04 April 2025 01:45:15 +0000 (0:00:00.222) 0:00:07.639 ********** 2025-04-04 01:45:15.795028 | orchestrator | skipping: [testbed-node-3] 2025-04-04 01:45:15.795672 | orchestrator | 2025-04-04 01:45:15.795967 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-04 01:45:15.796519 | orchestrator | Friday 04 April 2025 01:45:15 +0000 (0:00:00.232) 0:00:07.872 ********** 2025-04-04 01:45:16.050568 | orchestrator | skipping: [testbed-node-3] 2025-04-04 01:45:16.051284 | orchestrator | 2025-04-04 01:45:16.051685 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-04 01:45:16.052789 | orchestrator | Friday 04 April 2025 01:45:16 +0000 (0:00:00.254) 0:00:08.126 ********** 2025-04-04 01:45:16.282983 | orchestrator | skipping: [testbed-node-3] 2025-04-04 01:45:16.283875 | orchestrator | 2025-04-04 01:45:16.284531 | orchestrator | TASK [Add known 
partitions to the list of available block devices] ************* 2025-04-04 01:45:16.286475 | orchestrator | Friday 04 April 2025 01:45:16 +0000 (0:00:00.232) 0:00:08.359 ********** 2025-04-04 01:45:16.954738 | orchestrator | skipping: [testbed-node-3] 2025-04-04 01:45:16.956059 | orchestrator | 2025-04-04 01:45:16.956133 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-04 01:45:16.956997 | orchestrator | Friday 04 April 2025 01:45:16 +0000 (0:00:00.672) 0:00:09.031 ********** 2025-04-04 01:45:17.176314 | orchestrator | skipping: [testbed-node-3] 2025-04-04 01:45:17.178506 | orchestrator | 2025-04-04 01:45:17.179410 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-04 01:45:17.180372 | orchestrator | Friday 04 April 2025 01:45:17 +0000 (0:00:00.219) 0:00:09.251 ********** 2025-04-04 01:45:17.397782 | orchestrator | skipping: [testbed-node-3] 2025-04-04 01:45:17.400136 | orchestrator | 2025-04-04 01:45:17.400885 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-04 01:45:17.401631 | orchestrator | Friday 04 April 2025 01:45:17 +0000 (0:00:00.224) 0:00:09.475 ********** 2025-04-04 01:45:17.609565 | orchestrator | skipping: [testbed-node-3] 2025-04-04 01:45:17.610321 | orchestrator | 2025-04-04 01:45:17.610968 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-04 01:45:17.611759 | orchestrator | Friday 04 April 2025 01:45:17 +0000 (0:00:00.210) 0:00:09.686 ********** 2025-04-04 01:45:18.385038 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2025-04-04 01:45:18.386527 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2025-04-04 01:45:18.386572 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2025-04-04 01:45:18.388009 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2025-04-04 01:45:18.389563 | orchestrator | 2025-04-04 
01:45:18.390445 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-04 01:45:18.391466 | orchestrator | Friday 04 April 2025 01:45:18 +0000 (0:00:00.774) 0:00:10.461 ********** 2025-04-04 01:45:18.623611 | orchestrator | skipping: [testbed-node-3] 2025-04-04 01:45:18.625445 | orchestrator | 2025-04-04 01:45:18.626247 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-04 01:45:18.854513 | orchestrator | Friday 04 April 2025 01:45:18 +0000 (0:00:00.239) 0:00:10.700 ********** 2025-04-04 01:45:18.854617 | orchestrator | skipping: [testbed-node-3] 2025-04-04 01:45:18.855120 | orchestrator | 2025-04-04 01:45:18.855993 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-04 01:45:19.062578 | orchestrator | Friday 04 April 2025 01:45:18 +0000 (0:00:00.231) 0:00:10.932 ********** 2025-04-04 01:45:19.062684 | orchestrator | skipping: [testbed-node-3] 2025-04-04 01:45:19.062747 | orchestrator | 2025-04-04 01:45:19.062770 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-04 01:45:19.063226 | orchestrator | Friday 04 April 2025 01:45:19 +0000 (0:00:00.207) 0:00:11.139 ********** 2025-04-04 01:45:19.281298 | orchestrator | skipping: [testbed-node-3] 2025-04-04 01:45:19.281709 | orchestrator | 2025-04-04 01:45:19.282895 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-04-04 01:45:19.283506 | orchestrator | Friday 04 April 2025 01:45:19 +0000 (0:00:00.218) 0:00:11.358 ********** 2025-04-04 01:45:19.473184 | orchestrator | skipping: [testbed-node-3] 2025-04-04 01:45:19.473870 | orchestrator | 2025-04-04 01:45:19.474770 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-04-04 01:45:19.475649 | orchestrator | Friday 04 April 2025 01:45:19 +0000 (0:00:00.191) 
0:00:11.550 ********** 2025-04-04 01:45:20.006205 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '9f0483a6-c41e-5563-949a-aef1708b660a'}}) 2025-04-04 01:45:20.006633 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '23d8bf4a-a5da-5ae2-b325-0a959eaad2e5'}}) 2025-04-04 01:45:20.007537 | orchestrator | 2025-04-04 01:45:20.008021 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-04-04 01:45:20.008778 | orchestrator | Friday 04 April 2025 01:45:20 +0000 (0:00:00.532) 0:00:12.083 ********** 2025-04-04 01:45:22.109848 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-9f0483a6-c41e-5563-949a-aef1708b660a', 'data_vg': 'ceph-9f0483a6-c41e-5563-949a-aef1708b660a'}) 2025-04-04 01:45:22.112626 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-23d8bf4a-a5da-5ae2-b325-0a959eaad2e5', 'data_vg': 'ceph-23d8bf4a-a5da-5ae2-b325-0a959eaad2e5'}) 2025-04-04 01:45:22.114083 | orchestrator | 2025-04-04 01:45:22.115677 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-04-04 01:45:22.116485 | orchestrator | Friday 04 April 2025 01:45:22 +0000 (0:00:02.102) 0:00:14.185 ********** 2025-04-04 01:45:22.305090 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9f0483a6-c41e-5563-949a-aef1708b660a', 'data_vg': 'ceph-9f0483a6-c41e-5563-949a-aef1708b660a'})  2025-04-04 01:45:22.305705 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-23d8bf4a-a5da-5ae2-b325-0a959eaad2e5', 'data_vg': 'ceph-23d8bf4a-a5da-5ae2-b325-0a959eaad2e5'})  2025-04-04 01:45:22.306690 | orchestrator | skipping: [testbed-node-3] 2025-04-04 01:45:22.307600 | orchestrator | 2025-04-04 01:45:22.309473 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-04-04 01:45:22.310777 | orchestrator | Friday 04 April 2025 
01:45:22 +0000 (0:00:00.195) 0:00:14.381 ********** 2025-04-04 01:45:23.692384 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-9f0483a6-c41e-5563-949a-aef1708b660a', 'data_vg': 'ceph-9f0483a6-c41e-5563-949a-aef1708b660a'}) 2025-04-04 01:45:23.693678 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-23d8bf4a-a5da-5ae2-b325-0a959eaad2e5', 'data_vg': 'ceph-23d8bf4a-a5da-5ae2-b325-0a959eaad2e5'}) 2025-04-04 01:45:23.904017 | orchestrator | 2025-04-04 01:45:23.904128 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-04-04 01:45:23.904915 | orchestrator | Friday 04 April 2025 01:45:23 +0000 (0:00:01.387) 0:00:15.769 ********** 2025-04-04 01:45:23.905009 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9f0483a6-c41e-5563-949a-aef1708b660a', 'data_vg': 'ceph-9f0483a6-c41e-5563-949a-aef1708b660a'})  2025-04-04 01:45:23.905623 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-23d8bf4a-a5da-5ae2-b325-0a959eaad2e5', 'data_vg': 'ceph-23d8bf4a-a5da-5ae2-b325-0a959eaad2e5'})  2025-04-04 01:45:23.906925 | orchestrator | skipping: [testbed-node-3] 2025-04-04 01:45:23.908069 | orchestrator | 2025-04-04 01:45:23.909063 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-04-04 01:45:23.909976 | orchestrator | Friday 04 April 2025 01:45:23 +0000 (0:00:00.211) 0:00:15.980 ********** 2025-04-04 01:45:24.088992 | orchestrator | skipping: [testbed-node-3] 2025-04-04 01:45:24.090556 | orchestrator | 2025-04-04 01:45:24.090592 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-04-04 01:45:24.296731 | orchestrator | Friday 04 April 2025 01:45:24 +0000 (0:00:00.178) 0:00:16.159 ********** 2025-04-04 01:45:24.296853 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9f0483a6-c41e-5563-949a-aef1708b660a', 'data_vg': 
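The "Create block VGs" and "Create block LVs" tasks above loop over the `lvm_volumes` entries; conceptually each item becomes one `vgcreate` and one `lvcreate` call. A hedged sketch of the equivalent CLI invocations, assuming one whole physical volume per VG and an LV spanning all of it (the playbook itself most likely drives LVM through Ansible modules rather than shelling out, and the VG-to-PV mapping comes from the "Create dict of block VGs -> PVs" task):

```python
def lvm_commands(item, pv_device):
    """Render the vgcreate/lvcreate pair for one lvm_volumes entry.
    pv_device is the backing block device (e.g. /dev/sdb)."""
    return [
        ["vgcreate", item["data_vg"], pv_device],
        ["lvcreate", "-n", item["data"], "-l", "100%FREE", item["data_vg"]],
    ]

# Entry values taken from the log output for testbed-node-3.
cmds = lvm_commands(
    {"data": "osd-block-9f0483a6-c41e-5563-949a-aef1708b660a",
     "data_vg": "ceph-9f0483a6-c41e-5563-949a-aef1708b660a"},
    "/dev/sdb",
)
for c in cmds:
    print(" ".join(c))
```

The resulting `ceph-<uuid>/osd-block-<uuid>` VG/LV pairs are what ceph-volume later consumes when preparing the OSDs.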
'ceph-9f0483a6-c41e-5563-949a-aef1708b660a'})
2025-04-04 01:45:24.297826 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-23d8bf4a-a5da-5ae2-b325-0a959eaad2e5', 'data_vg': 'ceph-23d8bf4a-a5da-5ae2-b325-0a959eaad2e5'})
2025-04-04 01:45:24.298093 | orchestrator | skipping: [testbed-node-3]
2025-04-04 01:45:24.299895 | orchestrator |
2025-04-04 01:45:24.300265 | orchestrator | TASK [Create WAL VGs] **********************************************************
2025-04-04 01:45:24.300766 | orchestrator | Friday 04 April 2025 01:45:24 +0000 (0:00:00.213) 0:00:16.373 **********
2025-04-04 01:45:24.469192 | orchestrator | skipping: [testbed-node-3]
2025-04-04 01:45:24.655789 | orchestrator |
2025-04-04 01:45:24.655842 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2025-04-04 01:45:24.655859 | orchestrator | Friday 04 April 2025 01:45:24 +0000 (0:00:00.169) 0:00:16.542 **********
2025-04-04 01:45:24.655883 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9f0483a6-c41e-5563-949a-aef1708b660a', 'data_vg': 'ceph-9f0483a6-c41e-5563-949a-aef1708b660a'})
2025-04-04 01:45:24.657125 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-23d8bf4a-a5da-5ae2-b325-0a959eaad2e5', 'data_vg': 'ceph-23d8bf4a-a5da-5ae2-b325-0a959eaad2e5'})
2025-04-04 01:45:24.658716 | orchestrator | skipping: [testbed-node-3]
2025-04-04 01:45:24.660550 | orchestrator |
2025-04-04 01:45:24.661469 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2025-04-04 01:45:24.662175 | orchestrator | Friday 04 April 2025 01:45:24 +0000 (0:00:00.189) 0:00:16.732 **********
2025-04-04 01:45:25.113549 | orchestrator | skipping: [testbed-node-3]
2025-04-04 01:45:25.115424 | orchestrator |
2025-04-04 01:45:25.116327 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2025-04-04 01:45:25.117682 | orchestrator | Friday 04 April 2025 01:45:25 +0000 (0:00:00.459) 0:00:17.191 **********
2025-04-04 01:45:25.306400 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9f0483a6-c41e-5563-949a-aef1708b660a', 'data_vg': 'ceph-9f0483a6-c41e-5563-949a-aef1708b660a'})
2025-04-04 01:45:25.307794 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-23d8bf4a-a5da-5ae2-b325-0a959eaad2e5', 'data_vg': 'ceph-23d8bf4a-a5da-5ae2-b325-0a959eaad2e5'})
2025-04-04 01:45:25.308742 | orchestrator | skipping: [testbed-node-3]
2025-04-04 01:45:25.308846 | orchestrator |
2025-04-04 01:45:25.310553 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2025-04-04 01:45:25.311225 | orchestrator | Friday 04 April 2025 01:45:25 +0000 (0:00:00.176) 0:00:17.382 **********
2025-04-04 01:45:25.482421 | orchestrator | ok: [testbed-node-3]
2025-04-04 01:45:25.483332 | orchestrator |
2025-04-04 01:45:25.484021 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2025-04-04 01:45:25.484859 | orchestrator | Friday 04 April 2025 01:45:25 +0000 (0:00:00.176) 0:00:17.558 **********
2025-04-04 01:45:25.664704 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9f0483a6-c41e-5563-949a-aef1708b660a', 'data_vg': 'ceph-9f0483a6-c41e-5563-949a-aef1708b660a'})
2025-04-04 01:45:25.666233 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-23d8bf4a-a5da-5ae2-b325-0a959eaad2e5', 'data_vg': 'ceph-23d8bf4a-a5da-5ae2-b325-0a959eaad2e5'})
2025-04-04 01:45:25.667171 | orchestrator | skipping: [testbed-node-3]
2025-04-04 01:45:25.667969 | orchestrator |
2025-04-04 01:45:25.670527 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2025-04-04 01:45:25.899820 | orchestrator | Friday 04 April 2025 01:45:25 +0000 (0:00:00.234) 0:00:17.741 **********
2025-04-04 01:45:25.899927 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9f0483a6-c41e-5563-949a-aef1708b660a', 'data_vg': 'ceph-9f0483a6-c41e-5563-949a-aef1708b660a'})
2025-04-04 01:45:25.899996 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-23d8bf4a-a5da-5ae2-b325-0a959eaad2e5', 'data_vg': 'ceph-23d8bf4a-a5da-5ae2-b325-0a959eaad2e5'})
2025-04-04 01:45:25.900946 | orchestrator | skipping: [testbed-node-3]
2025-04-04 01:45:25.901629 | orchestrator |
2025-04-04 01:45:25.902445 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2025-04-04 01:45:25.902553 | orchestrator | Friday 04 April 2025 01:45:25 +0000 (0:00:00.234) 0:00:17.976 **********
2025-04-04 01:45:26.101715 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9f0483a6-c41e-5563-949a-aef1708b660a', 'data_vg': 'ceph-9f0483a6-c41e-5563-949a-aef1708b660a'})
2025-04-04 01:45:26.105864 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-23d8bf4a-a5da-5ae2-b325-0a959eaad2e5', 'data_vg': 'ceph-23d8bf4a-a5da-5ae2-b325-0a959eaad2e5'})
2025-04-04 01:45:26.106081 | orchestrator | skipping: [testbed-node-3]
2025-04-04 01:45:26.107081 | orchestrator |
2025-04-04 01:45:26.107806 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2025-04-04 01:45:26.108377 | orchestrator | Friday 04 April 2025 01:45:26 +0000 (0:00:00.199) 0:00:18.176 **********
2025-04-04 01:45:26.269649 | orchestrator | skipping: [testbed-node-3]
2025-04-04 01:45:26.270077 | orchestrator |
2025-04-04 01:45:26.270786 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2025-04-04 01:45:26.272021 | orchestrator | Friday 04 April 2025 01:45:26 +0000 (0:00:00.168) 0:00:18.345 **********
2025-04-04 01:45:26.437412 | orchestrator | skipping: [testbed-node-3]
2025-04-04 01:45:26.438585 | orchestrator |
2025-04-04 01:45:26.439392 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2025-04-04 01:45:26.440737 | orchestrator | Friday 04 April 2025 01:45:26 +0000 (0:00:00.167) 0:00:18.512 **********
2025-04-04 01:45:26.606517 | orchestrator | skipping: [testbed-node-3]
2025-04-04 01:45:26.608568 | orchestrator |
2025-04-04 01:45:26.608709 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2025-04-04 01:45:26.761038 | orchestrator | Friday 04 April 2025 01:45:26 +0000 (0:00:00.171) 0:00:18.684 **********
2025-04-04 01:45:26.761163 | orchestrator | ok: [testbed-node-3] => {
2025-04-04 01:45:26.761446 | orchestrator |  "_num_osds_wanted_per_db_vg": {}
2025-04-04 01:45:26.762645 | orchestrator | }
2025-04-04 01:45:26.763293 | orchestrator |
2025-04-04 01:45:26.763971 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2025-04-04 01:45:26.764721 | orchestrator | Friday 04 April 2025 01:45:26 +0000 (0:00:00.153) 0:00:18.837 **********
2025-04-04 01:45:26.920642 | orchestrator | ok: [testbed-node-3] => {
2025-04-04 01:45:26.922587 | orchestrator |  "_num_osds_wanted_per_wal_vg": {}
2025-04-04 01:45:26.922961 | orchestrator | }
2025-04-04 01:45:26.923525 | orchestrator |
2025-04-04 01:45:26.924383 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2025-04-04 01:45:26.924779 | orchestrator | Friday 04 April 2025 01:45:26 +0000 (0:00:00.159) 0:00:18.997 **********
2025-04-04 01:45:27.361667 | orchestrator | ok: [testbed-node-3] => {
2025-04-04 01:45:27.361772 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {}
2025-04-04 01:45:27.362617 | orchestrator | }
2025-04-04 01:45:27.364056 | orchestrator |
2025-04-04 01:45:27.364334 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2025-04-04 01:45:27.365561 | orchestrator | Friday 04 April 2025 01:45:27 +0000 (0:00:00.439) 0:00:19.437 **********
2025-04-04 01:45:28.030932 | orchestrator | ok: [testbed-node-3]
2025-04-04 01:45:28.031158 | orchestrator |
2025-04-04 01:45:28.032287 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2025-04-04 01:45:28.033330 | orchestrator | Friday 04 April 2025 01:45:28 +0000 (0:00:00.669) 0:00:20.107 **********
2025-04-04 01:45:28.533779 | orchestrator | ok: [testbed-node-3]
2025-04-04 01:45:28.536513 | orchestrator |
2025-04-04 01:45:28.537874 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2025-04-04 01:45:28.537913 | orchestrator | Friday 04 April 2025 01:45:28 +0000 (0:00:00.500) 0:00:20.607 **********
2025-04-04 01:45:29.060197 | orchestrator | ok: [testbed-node-3]
2025-04-04 01:45:29.061057 | orchestrator |
2025-04-04 01:45:29.062410 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2025-04-04 01:45:29.062886 | orchestrator | Friday 04 April 2025 01:45:29 +0000 (0:00:00.528) 0:00:21.136 **********
2025-04-04 01:45:29.240741 | orchestrator | ok: [testbed-node-3]
2025-04-04 01:45:29.241067 | orchestrator |
2025-04-04 01:45:29.242145 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2025-04-04 01:45:29.243026 | orchestrator | Friday 04 April 2025 01:45:29 +0000 (0:00:00.180) 0:00:21.317 **********
2025-04-04 01:45:29.369530 | orchestrator | skipping: [testbed-node-3]
2025-04-04 01:45:29.372981 | orchestrator |
2025-04-04 01:45:29.374200 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2025-04-04 01:45:29.375472 | orchestrator | Friday 04 April 2025 01:45:29 +0000 (0:00:00.127) 0:00:21.445 **********
2025-04-04 01:45:29.505520 | orchestrator | skipping: [testbed-node-3]
2025-04-04 01:45:29.506268 | orchestrator |
2025-04-04 01:45:29.507864 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2025-04-04 01:45:29.510521 | orchestrator | Friday 04 April 2025 01:45:29 +0000 (0:00:00.137) 0:00:21.582 **********
2025-04-04 01:45:29.679739 | orchestrator | ok: [testbed-node-3] => {
2025-04-04 01:45:29.682418 | orchestrator |  "vgs_report": {
2025-04-04 01:45:29.683321 | orchestrator |  "vg": []
2025-04-04 01:45:29.686327 | orchestrator |  }
2025-04-04 01:45:29.686438 | orchestrator | }
2025-04-04 01:45:29.686460 | orchestrator |
2025-04-04 01:45:29.686481 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2025-04-04 01:45:29.687449 | orchestrator | Friday 04 April 2025 01:45:29 +0000 (0:00:00.173) 0:00:21.756 **********
2025-04-04 01:45:29.847723 | orchestrator | skipping: [testbed-node-3]
2025-04-04 01:45:29.851640 | orchestrator |
2025-04-04 01:45:29.860184 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2025-04-04 01:45:30.017880 | orchestrator | Friday 04 April 2025 01:45:29 +0000 (0:00:00.167) 0:00:21.924 **********
2025-04-04 01:45:30.017925 | orchestrator | skipping: [testbed-node-3]
2025-04-04 01:45:30.018258 | orchestrator |
2025-04-04 01:45:30.018288 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2025-04-04 01:45:30.019202 | orchestrator | Friday 04 April 2025 01:45:30 +0000 (0:00:00.170) 0:00:22.095 **********
2025-04-04 01:45:30.183662 | orchestrator | skipping: [testbed-node-3]
2025-04-04 01:45:30.184068 | orchestrator |
2025-04-04 01:45:30.185305 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2025-04-04 01:45:30.185800 | orchestrator | Friday 04 April 2025 01:45:30 +0000 (0:00:00.162) 0:00:22.258 **********
2025-04-04 01:45:30.621514 | orchestrator | skipping: [testbed-node-3]
2025-04-04 01:45:30.622272 | orchestrator |
2025-04-04 01:45:30.622849 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2025-04-04 01:45:30.623795 | orchestrator | Friday 04 April 2025 01:45:30 +0000 (0:00:00.438) 0:00:22.696 **********
2025-04-04 01:45:30.781989 | orchestrator | skipping: [testbed-node-3]
2025-04-04 01:45:30.964018 | orchestrator |
2025-04-04 01:45:30.964067 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2025-04-04 01:45:30.964083 | orchestrator | Friday 04 April 2025 01:45:30 +0000 (0:00:00.162) 0:00:22.858 **********
2025-04-04 01:45:30.964105 | orchestrator | skipping: [testbed-node-3]
2025-04-04 01:45:30.965474 | orchestrator |
2025-04-04 01:45:30.965676 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2025-04-04 01:45:30.966943 | orchestrator | Friday 04 April 2025 01:45:30 +0000 (0:00:00.182) 0:00:23.041 **********
2025-04-04 01:45:31.125377 | orchestrator | skipping: [testbed-node-3]
2025-04-04 01:45:31.126560 | orchestrator |
2025-04-04 01:45:31.127655 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2025-04-04 01:45:31.128968 | orchestrator | Friday 04 April 2025 01:45:31 +0000 (0:00:00.159) 0:00:23.200 **********
2025-04-04 01:45:31.288942 | orchestrator | skipping: [testbed-node-3]
2025-04-04 01:45:31.290525 | orchestrator |
2025-04-04 01:45:31.293177 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2025-04-04 01:45:31.457084 | orchestrator | Friday 04 April 2025 01:45:31 +0000 (0:00:00.164) 0:00:23.365 **********
2025-04-04 01:45:31.457255 | orchestrator | skipping: [testbed-node-3]
2025-04-04 01:45:31.457636 | orchestrator |
2025-04-04 01:45:31.458306 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2025-04-04 01:45:31.458668 | orchestrator | Friday 04 April 2025 01:45:31 +0000 (0:00:00.168) 0:00:23.533 **********
2025-04-04 01:45:31.609786 | orchestrator | skipping: [testbed-node-3]
2025-04-04 01:45:31.611310 | orchestrator |
2025-04-04 01:45:31.612134 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2025-04-04 01:45:31.613035 | orchestrator | Friday 04 April 2025 01:45:31 +0000 (0:00:00.153) 0:00:23.687 **********
2025-04-04 01:45:31.777192 | orchestrator | skipping: [testbed-node-3]
2025-04-04 01:45:31.777312 | orchestrator |
2025-04-04 01:45:31.777375 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2025-04-04 01:45:31.778373 | orchestrator | Friday 04 April 2025 01:45:31 +0000 (0:00:00.164) 0:00:23.852 **********
2025-04-04 01:45:31.988541 | orchestrator | skipping: [testbed-node-3]
2025-04-04 01:45:31.989076 | orchestrator |
2025-04-04 01:45:31.989841 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2025-04-04 01:45:31.990817 | orchestrator | Friday 04 April 2025 01:45:31 +0000 (0:00:00.212) 0:00:24.064 **********
2025-04-04 01:45:32.144372 | orchestrator | skipping: [testbed-node-3]
2025-04-04 01:45:32.145749 | orchestrator |
2025-04-04 01:45:32.146146 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2025-04-04 01:45:32.146754 | orchestrator | Friday 04 April 2025 01:45:32 +0000 (0:00:00.156) 0:00:24.221 **********
2025-04-04 01:45:32.302863 | orchestrator | skipping: [testbed-node-3]
2025-04-04 01:45:32.303189 | orchestrator |
2025-04-04 01:45:32.303217 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2025-04-04 01:45:32.303237 | orchestrator | Friday 04 April 2025 01:45:32 +0000 (0:00:00.158) 0:00:24.379 **********
2025-04-04 01:45:32.481243 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9f0483a6-c41e-5563-949a-aef1708b660a', 'data_vg': 'ceph-9f0483a6-c41e-5563-949a-aef1708b660a'})
2025-04-04 01:45:32.482854 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-23d8bf4a-a5da-5ae2-b325-0a959eaad2e5', 'data_vg': 'ceph-23d8bf4a-a5da-5ae2-b325-0a959eaad2e5'})
2025-04-04 01:45:32.482895 | orchestrator | skipping: [testbed-node-3]
2025-04-04 01:45:32.483299 | orchestrator |
2025-04-04 01:45:32.484014 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2025-04-04 01:45:32.485100 | orchestrator | Friday 04 April 2025 01:45:32 +0000 (0:00:00.178) 0:00:24.558 **********
2025-04-04 01:45:32.957989 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9f0483a6-c41e-5563-949a-aef1708b660a', 'data_vg': 'ceph-9f0483a6-c41e-5563-949a-aef1708b660a'})
2025-04-04 01:45:32.961078 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-23d8bf4a-a5da-5ae2-b325-0a959eaad2e5', 'data_vg': 'ceph-23d8bf4a-a5da-5ae2-b325-0a959eaad2e5'})
2025-04-04 01:45:32.962095 | orchestrator | skipping: [testbed-node-3]
2025-04-04 01:45:32.963178 | orchestrator |
2025-04-04 01:45:32.964438 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2025-04-04 01:45:32.965759 | orchestrator | Friday 04 April 2025 01:45:32 +0000 (0:00:00.476) 0:00:25.034 **********
2025-04-04 01:45:33.150444 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9f0483a6-c41e-5563-949a-aef1708b660a', 'data_vg': 'ceph-9f0483a6-c41e-5563-949a-aef1708b660a'})
2025-04-04 01:45:33.151394 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-23d8bf4a-a5da-5ae2-b325-0a959eaad2e5', 'data_vg': 'ceph-23d8bf4a-a5da-5ae2-b325-0a959eaad2e5'})
2025-04-04 01:45:33.152226 | orchestrator | skipping: [testbed-node-3]
2025-04-04 01:45:33.152807 | orchestrator |
2025-04-04 01:45:33.153631 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2025-04-04 01:45:33.154135 | orchestrator | Friday 04 April 2025 01:45:33 +0000 (0:00:00.193) 0:00:25.227 **********
2025-04-04 01:45:33.346743 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9f0483a6-c41e-5563-949a-aef1708b660a', 'data_vg': 'ceph-9f0483a6-c41e-5563-949a-aef1708b660a'})
2025-04-04 01:45:33.347917 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-23d8bf4a-a5da-5ae2-b325-0a959eaad2e5', 'data_vg': 'ceph-23d8bf4a-a5da-5ae2-b325-0a959eaad2e5'})
2025-04-04 01:45:33.350115 | orchestrator | skipping: [testbed-node-3]
2025-04-04 01:45:33.351189 | orchestrator |
2025-04-04 01:45:33.353695 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2025-04-04 01:45:33.354095 | orchestrator | Friday 04 April 2025 01:45:33 +0000 (0:00:00.196) 0:00:25.423 **********
2025-04-04 01:45:33.531269 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9f0483a6-c41e-5563-949a-aef1708b660a', 'data_vg': 'ceph-9f0483a6-c41e-5563-949a-aef1708b660a'})
2025-04-04 01:45:33.531425 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-23d8bf4a-a5da-5ae2-b325-0a959eaad2e5', 'data_vg': 'ceph-23d8bf4a-a5da-5ae2-b325-0a959eaad2e5'})
2025-04-04 01:45:33.531453 | orchestrator | skipping: [testbed-node-3]
2025-04-04 01:45:33.531887 | orchestrator |
2025-04-04 01:45:33.531920 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2025-04-04 01:45:33.532388 | orchestrator | Friday 04 April 2025 01:45:33 +0000 (0:00:00.184) 0:00:25.608 **********
2025-04-04 01:45:33.751997 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9f0483a6-c41e-5563-949a-aef1708b660a', 'data_vg': 'ceph-9f0483a6-c41e-5563-949a-aef1708b660a'})
2025-04-04 01:45:33.752958 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-23d8bf4a-a5da-5ae2-b325-0a959eaad2e5', 'data_vg': 'ceph-23d8bf4a-a5da-5ae2-b325-0a959eaad2e5'})
2025-04-04 01:45:33.753799 | orchestrator | skipping: [testbed-node-3]
2025-04-04 01:45:33.754528 | orchestrator |
2025-04-04 01:45:33.755333 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2025-04-04 01:45:33.756327 | orchestrator | Friday 04 April 2025 01:45:33 +0000 (0:00:00.221) 0:00:25.829 **********
2025-04-04 01:45:33.949490 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9f0483a6-c41e-5563-949a-aef1708b660a', 'data_vg': 'ceph-9f0483a6-c41e-5563-949a-aef1708b660a'})
2025-04-04 01:45:33.950442 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-23d8bf4a-a5da-5ae2-b325-0a959eaad2e5', 'data_vg': 'ceph-23d8bf4a-a5da-5ae2-b325-0a959eaad2e5'})
2025-04-04 01:45:33.951294 | orchestrator | skipping: [testbed-node-3]
2025-04-04 01:45:33.952362 | orchestrator |
2025-04-04 01:45:33.955008 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2025-04-04 01:45:34.176068 | orchestrator | Friday 04 April 2025 01:45:33 +0000 (0:00:00.197) 0:00:26.026 **********
2025-04-04 01:45:34.176183 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9f0483a6-c41e-5563-949a-aef1708b660a', 'data_vg': 'ceph-9f0483a6-c41e-5563-949a-aef1708b660a'})
2025-04-04 01:45:34.176955 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-23d8bf4a-a5da-5ae2-b325-0a959eaad2e5', 'data_vg': 'ceph-23d8bf4a-a5da-5ae2-b325-0a959eaad2e5'})
2025-04-04 01:45:34.177431 | orchestrator | skipping: [testbed-node-3]
2025-04-04 01:45:34.178387 | orchestrator |
2025-04-04 01:45:34.178671 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2025-04-04 01:45:34.179067 | orchestrator | Friday 04 April 2025 01:45:34 +0000 (0:00:00.225) 0:00:26.252 **********
2025-04-04 01:45:34.783852 | orchestrator | ok: [testbed-node-3]
2025-04-04 01:45:34.784185 | orchestrator |
2025-04-04 01:45:34.784229 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2025-04-04 01:45:34.784805 | orchestrator | Friday 04 April 2025 01:45:34 +0000 (0:00:00.607) 0:00:26.859 **********
2025-04-04 01:45:35.501793 | orchestrator | ok: [testbed-node-3]
2025-04-04 01:45:35.502809 | orchestrator |
2025-04-04 01:45:35.503940 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2025-04-04 01:45:35.504569 | orchestrator | Friday 04 April 2025 01:45:35 +0000 (0:00:00.716) 0:00:27.576 **********
2025-04-04 01:45:35.695268 | orchestrator | ok: [testbed-node-3]
2025-04-04 01:45:35.696299 | orchestrator |
2025-04-04 01:45:35.697170 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2025-04-04 01:45:35.698391 | orchestrator | Friday 04 April 2025 01:45:35 +0000 (0:00:00.196) 0:00:27.772 **********
2025-04-04 01:45:36.183757 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-23d8bf4a-a5da-5ae2-b325-0a959eaad2e5', 'vg_name': 'ceph-23d8bf4a-a5da-5ae2-b325-0a959eaad2e5'})
2025-04-04 01:45:36.184790 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-9f0483a6-c41e-5563-949a-aef1708b660a', 'vg_name': 'ceph-9f0483a6-c41e-5563-949a-aef1708b660a'})
2025-04-04 01:45:36.185615 | orchestrator |
2025-04-04 01:45:36.186489 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2025-04-04 01:45:36.187834 | orchestrator | Friday 04 April 2025 01:45:36 +0000 (0:00:00.486) 0:00:28.259 **********
2025-04-04 01:45:36.394323 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9f0483a6-c41e-5563-949a-aef1708b660a', 'data_vg': 'ceph-9f0483a6-c41e-5563-949a-aef1708b660a'})
2025-04-04 01:45:36.394693 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-23d8bf4a-a5da-5ae2-b325-0a959eaad2e5', 'data_vg': 'ceph-23d8bf4a-a5da-5ae2-b325-0a959eaad2e5'})
2025-04-04 01:45:36.395251 | orchestrator | skipping: [testbed-node-3]
2025-04-04 01:45:36.395653 | orchestrator |
2025-04-04 01:45:36.396275 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2025-04-04 01:45:36.397163 | orchestrator | Friday 04 April 2025 01:45:36 +0000 (0:00:00.211) 0:00:28.471 **********
2025-04-04 01:45:36.606916 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9f0483a6-c41e-5563-949a-aef1708b660a', 'data_vg': 'ceph-9f0483a6-c41e-5563-949a-aef1708b660a'})
2025-04-04 01:45:36.607575 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-23d8bf4a-a5da-5ae2-b325-0a959eaad2e5', 'data_vg': 'ceph-23d8bf4a-a5da-5ae2-b325-0a959eaad2e5'})
2025-04-04 01:45:36.608976 | orchestrator | skipping: [testbed-node-3]
2025-04-04 01:45:36.609608 | orchestrator |
2025-04-04 01:45:36.609619 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2025-04-04 01:45:36.609628 | orchestrator | Friday 04 April 2025 01:45:36 +0000 (0:00:00.212) 0:00:28.683 **********
2025-04-04 01:45:36.824399 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9f0483a6-c41e-5563-949a-aef1708b660a', 'data_vg': 'ceph-9f0483a6-c41e-5563-949a-aef1708b660a'})
2025-04-04 01:45:36.824526 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-23d8bf4a-a5da-5ae2-b325-0a959eaad2e5', 'data_vg': 'ceph-23d8bf4a-a5da-5ae2-b325-0a959eaad2e5'})
2025-04-04 01:45:36.824732 | orchestrator | skipping: [testbed-node-3]
2025-04-04 01:45:36.825159 | orchestrator |
2025-04-04 01:45:36.825721 | orchestrator | TASK [Print LVM report data] ***************************************************
2025-04-04 01:45:36.825902 | orchestrator | Friday 04 April 2025 01:45:36 +0000 (0:00:00.218) 0:00:28.902 **********
2025-04-04 01:45:37.875924 | orchestrator | ok: [testbed-node-3] => {
2025-04-04 01:45:37.879104 | orchestrator |  "lvm_report": {
2025-04-04 01:45:37.882066 | orchestrator |  "lv": [
2025-04-04 01:45:37.883475 | orchestrator |  {
2025-04-04 01:45:37.883497 | orchestrator |  "lv_name": "osd-block-23d8bf4a-a5da-5ae2-b325-0a959eaad2e5",
2025-04-04 01:45:37.883974 | orchestrator |  "vg_name": "ceph-23d8bf4a-a5da-5ae2-b325-0a959eaad2e5"
2025-04-04 01:45:37.884743 | orchestrator |  },
2025-04-04 01:45:37.885095 | orchestrator |  {
2025-04-04 01:45:37.885812 | orchestrator |  "lv_name": "osd-block-9f0483a6-c41e-5563-949a-aef1708b660a",
2025-04-04 01:45:37.885912 | orchestrator |  "vg_name": "ceph-9f0483a6-c41e-5563-949a-aef1708b660a"
2025-04-04 01:45:37.886497 | orchestrator |  }
2025-04-04 01:45:37.886834 | orchestrator |  ],
2025-04-04 01:45:37.887518 | orchestrator |  "pv": [
2025-04-04 01:45:37.888534 | orchestrator |  {
2025-04-04 01:45:37.888773 | orchestrator |  "pv_name": "/dev/sdb",
2025-04-04 01:45:37.888787 | orchestrator |  "vg_name": "ceph-9f0483a6-c41e-5563-949a-aef1708b660a"
2025-04-04 01:45:37.888799 | orchestrator |  },
2025-04-04 01:45:37.889191 | orchestrator |  {
2025-04-04 01:45:37.889455 | orchestrator |  "pv_name": "/dev/sdc",
2025-04-04 01:45:37.889693 | orchestrator |  "vg_name": "ceph-23d8bf4a-a5da-5ae2-b325-0a959eaad2e5"
2025-04-04 01:45:37.890011 | orchestrator |  }
2025-04-04 01:45:37.890418 | orchestrator |  ]
2025-04-04 01:45:37.890475 | orchestrator |  }
2025-04-04 01:45:37.891034 | orchestrator | }
2025-04-04 01:45:37.891251 | orchestrator |
2025-04-04 01:45:37.891267 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2025-04-04 01:45:37.891488 | orchestrator |
2025-04-04 01:45:37.891768 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-04-04 01:45:37.892019 | orchestrator | Friday 04 April 2025 01:45:37 +0000 (0:00:01.044) 0:00:29.946 **********
2025-04-04 01:45:38.148582 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2025-04-04 01:45:38.149157 | orchestrator |
2025-04-04 01:45:38.149177 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-04-04 01:45:38.150332 | orchestrator | Friday 04 April 2025 01:45:38 +0000 (0:00:00.279) 0:00:30.226 **********
2025-04-04 01:45:38.417892 | orchestrator | ok: [testbed-node-4]
2025-04-04 01:45:38.419145 | orchestrator |
2025-04-04 01:45:38.957288 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-04 01:45:38.957440 | orchestrator | Friday 04 April 2025 01:45:38 +0000 (0:00:00.269) 0:00:30.495 **********
2025-04-04 01:45:38.957471 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0)
2025-04-04 01:45:38.957886 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1)
2025-04-04 01:45:38.958629 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2)
2025-04-04 01:45:38.959524 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3)
2025-04-04 01:45:38.960618 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4)
2025-04-04 01:45:38.963492 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5)
2025-04-04 01:45:38.964894 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6)
2025-04-04 01:45:38.965848 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7)
2025-04-04 01:45:38.966709 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda)
2025-04-04 01:45:38.966922 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb)
2025-04-04 01:45:38.967557 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc)
2025-04-04 01:45:38.968510 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd)
2025-04-04 01:45:38.969606 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0)
2025-04-04 01:45:38.970385 | orchestrator |
2025-04-04 01:45:38.970905 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-04 01:45:38.971572 | orchestrator | Friday 04 April 2025 01:45:38 +0000 (0:00:00.536) 0:00:31.032 **********
2025-04-04 01:45:39.169809 | orchestrator | skipping: [testbed-node-4]
2025-04-04 01:45:39.169999 | orchestrator |
2025-04-04 01:45:39.170306 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-04 01:45:39.170702 | orchestrator | Friday 04 April 2025 01:45:39 +0000 (0:00:00.215) 0:00:31.247 **********
2025-04-04 01:45:39.423215 | orchestrator | skipping: [testbed-node-4]
2025-04-04 01:45:39.423834 | orchestrator |
2025-04-04 01:45:39.424132 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-04 01:45:39.425290 | orchestrator | Friday 04 April 2025 01:45:39 +0000 (0:00:00.253) 0:00:31.501 **********
2025-04-04 01:45:39.621160 | orchestrator | skipping: [testbed-node-4]
2025-04-04 01:45:39.621924 | orchestrator |
2025-04-04 01:45:39.622639 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-04 01:45:39.622674 | orchestrator | Friday 04 April 2025 01:45:39 +0000 (0:00:00.196) 0:00:31.697 **********
2025-04-04 01:45:39.889057 | orchestrator | skipping: [testbed-node-4]
2025-04-04 01:45:39.889605 | orchestrator |
2025-04-04 01:45:39.890187 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-04 01:45:39.891013 | orchestrator | Friday 04 April 2025 01:45:39 +0000 (0:00:00.268) 0:00:31.966 **********
2025-04-04 01:45:40.104009 | orchestrator | skipping: [testbed-node-4]
2025-04-04 01:45:40.105644 | orchestrator |
2025-04-04 01:45:40.106802 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-04 01:45:40.343960 | orchestrator | Friday 04 April 2025 01:45:40 +0000 (0:00:00.215) 0:00:32.182 **********
2025-04-04 01:45:40.344064 | orchestrator | skipping: [testbed-node-4]
2025-04-04 01:45:40.344124 | orchestrator |
2025-04-04 01:45:40.345126 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-04 01:45:40.345520 | orchestrator | Friday 04 April 2025 01:45:40 +0000 (0:00:00.239) 0:00:32.421 **********
2025-04-04 01:45:40.958133 | orchestrator | skipping: [testbed-node-4]
2025-04-04 01:45:40.960460 | orchestrator |
2025-04-04 01:45:40.961546 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-04 01:45:40.963629 | orchestrator | Friday 04 April 2025 01:45:40 +0000 (0:00:00.612) 0:00:33.033 **********
2025-04-04 01:45:41.192869 | orchestrator | skipping: [testbed-node-4]
2025-04-04 01:45:41.193516 | orchestrator |
2025-04-04 01:45:41.194138 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-04 01:45:41.195497 | orchestrator | Friday 04 April 2025 01:45:41 +0000 (0:00:00.236) 0:00:33.269 **********
2025-04-04 01:45:41.699513 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_3aa6ea8b-0579-4a55-b42a-d9feea6f29a9)
2025-04-04 01:45:41.699760 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_3aa6ea8b-0579-4a55-b42a-d9feea6f29a9)
2025-04-04 01:45:41.700969 | orchestrator |
2025-04-04 01:45:41.701549 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-04 01:45:41.702629 | orchestrator | Friday 04 April 2025 01:45:41 +0000 (0:00:00.505) 0:00:33.775 **********
2025-04-04 01:45:42.195242 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_d959f59d-34fb-41af-b696-545de6cad1c5)
2025-04-04 01:45:42.195481 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_d959f59d-34fb-41af-b696-545de6cad1c5)
2025-04-04 01:45:42.196634 | orchestrator |
2025-04-04 01:45:42.198819 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-04 01:45:42.202373 | orchestrator | Friday 04 April 2025 01:45:42 +0000 (0:00:00.496) 0:00:34.271 **********
2025-04-04 01:45:42.709449 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_b8c398ed-0e41-4b9f-9814-6176b4164583)
2025-04-04 01:45:42.710634 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_b8c398ed-0e41-4b9f-9814-6176b4164583)
2025-04-04 01:45:42.711497 | orchestrator |
2025-04-04 01:45:42.711534 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-04 01:45:42.712664 | orchestrator | Friday 04 April 2025 01:45:42 +0000 (0:00:00.514) 0:00:34.786 **********
2025-04-04 01:45:43.243685 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_3c72412b-1c42-4489-963e-1990a8f04f17)
2025-04-04 01:45:43.243840 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_3c72412b-1c42-4489-963e-1990a8f04f17)
2025-04-04 01:45:43.243866 | orchestrator |
2025-04-04 01:45:43.243993 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-04 01:45:43.244282 | orchestrator | Friday 04 April 2025 01:45:43 +0000 (0:00:00.532) 0:00:35.319 **********
2025-04-04 01:45:43.625969 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-04-04 01:45:43.626216 | orchestrator |
2025-04-04 01:45:43.626861 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-04-04 01:45:43.627392 | orchestrator | Friday 04 April 2025 01:45:43 +0000 (0:00:00.383) 0:00:35.702 **********
2025-04-04 01:45:44.188923 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0)
2025-04-04 01:45:44.189963 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1)
2025-04-04 01:45:44.190885 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2)
2025-04-04 01:45:44.192273 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3)
2025-04-04 01:45:44.193414 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4)
2025-04-04 01:45:44.194414 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5)
2025-04-04 01:45:44.195775 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6)
2025-04-04 01:45:44.197378 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7)
2025-04-04 01:45:44.198943 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda)
2025-04-04 01:45:44.200398 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb)
2025-04-04 01:45:44.201384 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc)
2025-04-04 01:45:44.202431 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd)
2025-04-04 01:45:44.203037 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0)
2025-04-04 01:45:44.203675 | orchestrator |
2025-04-04 01:45:44.206518 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-04-04 01:45:44.208720 | orchestrator | Friday 04 April 2025 01:45:44 +0000 (0:00:00.560) 0:00:36.263 **********
2025-04-04 01:45:44.679747 | orchestrator | skipping: [testbed-node-4]
2025-04-04 01:45:44.682123 | orchestrator |
2025-04-04 01:45:44.682992 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-04-04 01:45:44.684123 | orchestrator | Friday 04 April 2025 01:45:44 +0000 (0:00:00.488) 0:00:36.752 **********
2025-04-04 01:45:44.896950 | orchestrator | skipping: [testbed-node-4]
2025-04-04 01:45:44.899685 | orchestrator |
2025-04-04 01:45:44.900534 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-04-04 01:45:44.902323 | orchestrator | Friday 04 April 2025 01:45:44 +0000 (0:00:00.221) 0:00:36.973 **********
2025-04-04 01:45:45.139518 | orchestrator | skipping: [testbed-node-4]
2025-04-04 01:45:45.141169 | orchestrator |
2025-04-04 01:45:45.141797 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-04-04 01:45:45.143180 | orchestrator | Friday 04 April 2025 01:45:45 +0000 (0:00:00.241) 0:00:37.215 **********
2025-04-04 01:45:45.363280 | orchestrator | skipping: [testbed-node-4]
2025-04-04 01:45:45.364018 | orchestrator |
2025-04-04 01:45:45.365640 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-04-04 01:45:45.366637 | orchestrator | Friday 04 April 2025 01:45:45 +0000 (0:00:00.224) 0:00:37.440 **********
2025-04-04 01:45:45.583089 | orchestrator | skipping: [testbed-node-4]
2025-04-04 01:45:45.584123 | orchestrator |
2025-04-04 01:45:45.584817 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-04-04 01:45:45.585936 | orchestrator | Friday 04 April 2025 01:45:45 +0000 (0:00:00.218) 0:00:37.659 **********
2025-04-04 01:45:45.817930 | orchestrator | skipping: [testbed-node-4]
2025-04-04 01:45:45.819580 | orchestrator |
2025-04-04 01:45:45.821551 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-04-04 01:45:45.822119 | orchestrator | Friday 04 April 2025 01:45:45 +0000 (0:00:00.231)
0:00:37.891 ********** 2025-04-04 01:45:46.055667 | orchestrator | skipping: [testbed-node-4] 2025-04-04 01:45:46.056096 | orchestrator | 2025-04-04 01:45:46.056133 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-04 01:45:46.056155 | orchestrator | Friday 04 April 2025 01:45:46 +0000 (0:00:00.235) 0:00:38.126 ********** 2025-04-04 01:45:46.267852 | orchestrator | skipping: [testbed-node-4] 2025-04-04 01:45:46.268936 | orchestrator | 2025-04-04 01:45:46.269617 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-04 01:45:46.270633 | orchestrator | Friday 04 April 2025 01:45:46 +0000 (0:00:00.216) 0:00:38.343 ********** 2025-04-04 01:45:47.260164 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2025-04-04 01:45:47.261389 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2025-04-04 01:45:47.262594 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2025-04-04 01:45:47.266622 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2025-04-04 01:45:47.267804 | orchestrator | 2025-04-04 01:45:47.268689 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-04 01:45:47.269881 | orchestrator | Friday 04 April 2025 01:45:47 +0000 (0:00:00.993) 0:00:39.336 ********** 2025-04-04 01:45:47.507075 | orchestrator | skipping: [testbed-node-4] 2025-04-04 01:45:47.508020 | orchestrator | 2025-04-04 01:45:47.509208 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-04 01:45:47.513263 | orchestrator | Friday 04 April 2025 01:45:47 +0000 (0:00:00.246) 0:00:39.583 ********** 2025-04-04 01:45:47.744202 | orchestrator | skipping: [testbed-node-4] 2025-04-04 01:45:47.745197 | orchestrator | 2025-04-04 01:45:47.745560 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-04 01:45:47.746393 | orchestrator | Friday 04 
April 2025 01:45:47 +0000 (0:00:00.230) 0:00:39.813 ********** 2025-04-04 01:45:48.593867 | orchestrator | skipping: [testbed-node-4] 2025-04-04 01:45:48.595602 | orchestrator | 2025-04-04 01:45:48.598654 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-04 01:45:48.864292 | orchestrator | Friday 04 April 2025 01:45:48 +0000 (0:00:00.856) 0:00:40.670 ********** 2025-04-04 01:45:48.864416 | orchestrator | skipping: [testbed-node-4] 2025-04-04 01:45:48.865986 | orchestrator | 2025-04-04 01:45:48.867145 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-04-04 01:45:48.868297 | orchestrator | Friday 04 April 2025 01:45:48 +0000 (0:00:00.268) 0:00:40.939 ********** 2025-04-04 01:45:49.045509 | orchestrator | skipping: [testbed-node-4] 2025-04-04 01:45:49.045723 | orchestrator | 2025-04-04 01:45:49.047049 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-04-04 01:45:49.047685 | orchestrator | Friday 04 April 2025 01:45:49 +0000 (0:00:00.181) 0:00:41.120 ********** 2025-04-04 01:45:49.319625 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '8a501af7-3b2a-532c-a25d-8e5c367a167f'}}) 2025-04-04 01:45:49.320846 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '0aaa6bd4-fef5-5601-ad5e-b9ddf526c824'}}) 2025-04-04 01:45:49.321980 | orchestrator | 2025-04-04 01:45:49.325646 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-04-04 01:45:51.323855 | orchestrator | Friday 04 April 2025 01:45:49 +0000 (0:00:00.275) 0:00:41.396 ********** 2025-04-04 01:45:51.324006 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-8a501af7-3b2a-532c-a25d-8e5c367a167f', 'data_vg': 'ceph-8a501af7-3b2a-532c-a25d-8e5c367a167f'}) 2025-04-04 01:45:51.324454 | orchestrator | changed: [testbed-node-4] 
=> (item={'data': 'osd-block-0aaa6bd4-fef5-5601-ad5e-b9ddf526c824', 'data_vg': 'ceph-0aaa6bd4-fef5-5601-ad5e-b9ddf526c824'}) 2025-04-04 01:45:51.324484 | orchestrator | 2025-04-04 01:45:51.324506 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-04-04 01:45:51.325478 | orchestrator | Friday 04 April 2025 01:45:51 +0000 (0:00:02.003) 0:00:43.399 ********** 2025-04-04 01:45:51.518166 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8a501af7-3b2a-532c-a25d-8e5c367a167f', 'data_vg': 'ceph-8a501af7-3b2a-532c-a25d-8e5c367a167f'})  2025-04-04 01:45:51.519500 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0aaa6bd4-fef5-5601-ad5e-b9ddf526c824', 'data_vg': 'ceph-0aaa6bd4-fef5-5601-ad5e-b9ddf526c824'})  2025-04-04 01:45:51.520766 | orchestrator | skipping: [testbed-node-4] 2025-04-04 01:45:51.524521 | orchestrator | 2025-04-04 01:45:52.772870 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-04-04 01:45:52.772996 | orchestrator | Friday 04 April 2025 01:45:51 +0000 (0:00:00.194) 0:00:43.594 ********** 2025-04-04 01:45:52.773030 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-8a501af7-3b2a-532c-a25d-8e5c367a167f', 'data_vg': 'ceph-8a501af7-3b2a-532c-a25d-8e5c367a167f'}) 2025-04-04 01:45:52.774258 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-0aaa6bd4-fef5-5601-ad5e-b9ddf526c824', 'data_vg': 'ceph-0aaa6bd4-fef5-5601-ad5e-b9ddf526c824'}) 2025-04-04 01:45:52.777497 | orchestrator | 2025-04-04 01:45:52.953513 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-04-04 01:45:52.953550 | orchestrator | Friday 04 April 2025 01:45:52 +0000 (0:00:01.253) 0:00:44.848 ********** 2025-04-04 01:45:52.953573 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8a501af7-3b2a-532c-a25d-8e5c367a167f', 'data_vg': 
'ceph-8a501af7-3b2a-532c-a25d-8e5c367a167f'})  2025-04-04 01:45:52.953848 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0aaa6bd4-fef5-5601-ad5e-b9ddf526c824', 'data_vg': 'ceph-0aaa6bd4-fef5-5601-ad5e-b9ddf526c824'})  2025-04-04 01:45:52.953875 | orchestrator | skipping: [testbed-node-4] 2025-04-04 01:45:52.953898 | orchestrator | 2025-04-04 01:45:52.955685 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-04-04 01:45:52.956226 | orchestrator | Friday 04 April 2025 01:45:52 +0000 (0:00:00.182) 0:00:45.030 ********** 2025-04-04 01:45:53.111680 | orchestrator | skipping: [testbed-node-4] 2025-04-04 01:45:53.113843 | orchestrator | 2025-04-04 01:45:53.118583 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-04-04 01:45:53.558143 | orchestrator | Friday 04 April 2025 01:45:53 +0000 (0:00:00.157) 0:00:45.188 ********** 2025-04-04 01:45:53.558218 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8a501af7-3b2a-532c-a25d-8e5c367a167f', 'data_vg': 'ceph-8a501af7-3b2a-532c-a25d-8e5c367a167f'})  2025-04-04 01:45:53.559570 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0aaa6bd4-fef5-5601-ad5e-b9ddf526c824', 'data_vg': 'ceph-0aaa6bd4-fef5-5601-ad5e-b9ddf526c824'})  2025-04-04 01:45:53.561154 | orchestrator | skipping: [testbed-node-4] 2025-04-04 01:45:53.563691 | orchestrator | 2025-04-04 01:45:53.566259 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-04-04 01:45:53.566293 | orchestrator | Friday 04 April 2025 01:45:53 +0000 (0:00:00.446) 0:00:45.634 ********** 2025-04-04 01:45:53.737854 | orchestrator | skipping: [testbed-node-4] 2025-04-04 01:45:53.738679 | orchestrator | 2025-04-04 01:45:53.739616 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-04-04 01:45:53.740301 | orchestrator | Friday 
04 April 2025 01:45:53 +0000 (0:00:00.180) 0:00:45.815 ********** 2025-04-04 01:45:53.918492 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8a501af7-3b2a-532c-a25d-8e5c367a167f', 'data_vg': 'ceph-8a501af7-3b2a-532c-a25d-8e5c367a167f'})  2025-04-04 01:45:53.919010 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0aaa6bd4-fef5-5601-ad5e-b9ddf526c824', 'data_vg': 'ceph-0aaa6bd4-fef5-5601-ad5e-b9ddf526c824'})  2025-04-04 01:45:53.919474 | orchestrator | skipping: [testbed-node-4] 2025-04-04 01:45:53.936834 | orchestrator | 2025-04-04 01:45:54.092101 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-04-04 01:45:54.092171 | orchestrator | Friday 04 April 2025 01:45:53 +0000 (0:00:00.180) 0:00:45.996 ********** 2025-04-04 01:45:54.092197 | orchestrator | skipping: [testbed-node-4] 2025-04-04 01:45:54.092610 | orchestrator | 2025-04-04 01:45:54.093563 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-04-04 01:45:54.095107 | orchestrator | Friday 04 April 2025 01:45:54 +0000 (0:00:00.172) 0:00:46.168 ********** 2025-04-04 01:45:54.294921 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8a501af7-3b2a-532c-a25d-8e5c367a167f', 'data_vg': 'ceph-8a501af7-3b2a-532c-a25d-8e5c367a167f'})  2025-04-04 01:45:54.295479 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0aaa6bd4-fef5-5601-ad5e-b9ddf526c824', 'data_vg': 'ceph-0aaa6bd4-fef5-5601-ad5e-b9ddf526c824'})  2025-04-04 01:45:54.296389 | orchestrator | skipping: [testbed-node-4] 2025-04-04 01:45:54.297467 | orchestrator | 2025-04-04 01:45:54.297510 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-04-04 01:45:54.486572 | orchestrator | Friday 04 April 2025 01:45:54 +0000 (0:00:00.204) 0:00:46.373 ********** 2025-04-04 01:45:54.486643 | orchestrator | ok: [testbed-node-4] 
2025-04-04 01:45:54.487871 | orchestrator | 2025-04-04 01:45:54.488872 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-04-04 01:45:54.492494 | orchestrator | Friday 04 April 2025 01:45:54 +0000 (0:00:00.188) 0:00:46.562 ********** 2025-04-04 01:45:54.698167 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8a501af7-3b2a-532c-a25d-8e5c367a167f', 'data_vg': 'ceph-8a501af7-3b2a-532c-a25d-8e5c367a167f'})  2025-04-04 01:45:54.703198 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0aaa6bd4-fef5-5601-ad5e-b9ddf526c824', 'data_vg': 'ceph-0aaa6bd4-fef5-5601-ad5e-b9ddf526c824'})  2025-04-04 01:45:54.703247 | orchestrator | skipping: [testbed-node-4] 2025-04-04 01:45:54.704118 | orchestrator | 2025-04-04 01:45:54.705177 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-04-04 01:45:54.705810 | orchestrator | Friday 04 April 2025 01:45:54 +0000 (0:00:00.211) 0:00:46.773 ********** 2025-04-04 01:45:54.912506 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8a501af7-3b2a-532c-a25d-8e5c367a167f', 'data_vg': 'ceph-8a501af7-3b2a-532c-a25d-8e5c367a167f'})  2025-04-04 01:45:54.913106 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0aaa6bd4-fef5-5601-ad5e-b9ddf526c824', 'data_vg': 'ceph-0aaa6bd4-fef5-5601-ad5e-b9ddf526c824'})  2025-04-04 01:45:54.913148 | orchestrator | skipping: [testbed-node-4] 2025-04-04 01:45:54.913185 | orchestrator | 2025-04-04 01:45:54.913971 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-04-04 01:45:54.917423 | orchestrator | Friday 04 April 2025 01:45:54 +0000 (0:00:00.214) 0:00:46.988 ********** 2025-04-04 01:45:55.094282 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8a501af7-3b2a-532c-a25d-8e5c367a167f', 'data_vg': 'ceph-8a501af7-3b2a-532c-a25d-8e5c367a167f'})  2025-04-04 
01:45:55.094899 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0aaa6bd4-fef5-5601-ad5e-b9ddf526c824', 'data_vg': 'ceph-0aaa6bd4-fef5-5601-ad5e-b9ddf526c824'})  2025-04-04 01:45:55.096168 | orchestrator | skipping: [testbed-node-4] 2025-04-04 01:45:55.099520 | orchestrator | 2025-04-04 01:45:55.274710 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-04-04 01:45:55.274816 | orchestrator | Friday 04 April 2025 01:45:55 +0000 (0:00:00.180) 0:00:47.168 ********** 2025-04-04 01:45:55.274866 | orchestrator | skipping: [testbed-node-4] 2025-04-04 01:45:55.274955 | orchestrator | 2025-04-04 01:45:55.275645 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-04-04 01:45:55.276063 | orchestrator | Friday 04 April 2025 01:45:55 +0000 (0:00:00.183) 0:00:47.352 ********** 2025-04-04 01:45:55.454763 | orchestrator | skipping: [testbed-node-4] 2025-04-04 01:45:55.455613 | orchestrator | 2025-04-04 01:45:55.457186 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-04-04 01:45:55.458633 | orchestrator | Friday 04 April 2025 01:45:55 +0000 (0:00:00.179) 0:00:47.531 ********** 2025-04-04 01:45:55.930520 | orchestrator | skipping: [testbed-node-4] 2025-04-04 01:45:55.931505 | orchestrator | 2025-04-04 01:45:55.939121 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-04-04 01:45:55.947771 | orchestrator | Friday 04 April 2025 01:45:55 +0000 (0:00:00.472) 0:00:48.004 ********** 2025-04-04 01:45:56.140554 | orchestrator | ok: [testbed-node-4] => { 2025-04-04 01:45:56.297485 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-04-04 01:45:56.297582 | orchestrator | } 2025-04-04 01:45:56.297598 | orchestrator | 2025-04-04 01:45:56.297614 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-04-04 
01:45:56.297630 | orchestrator | Friday 04 April 2025 01:45:56 +0000 (0:00:00.201) 0:00:48.206 ********** 2025-04-04 01:45:56.297659 | orchestrator | ok: [testbed-node-4] => { 2025-04-04 01:45:56.301375 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-04-04 01:45:56.305425 | orchestrator | } 2025-04-04 01:45:56.305460 | orchestrator | 2025-04-04 01:45:56.307727 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-04-04 01:45:56.307760 | orchestrator | Friday 04 April 2025 01:45:56 +0000 (0:00:00.169) 0:00:48.375 ********** 2025-04-04 01:45:56.456534 | orchestrator | ok: [testbed-node-4] => { 2025-04-04 01:45:56.456638 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-04-04 01:45:56.456657 | orchestrator | } 2025-04-04 01:45:56.456672 | orchestrator | 2025-04-04 01:45:56.456691 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-04-04 01:45:56.459716 | orchestrator | Friday 04 April 2025 01:45:56 +0000 (0:00:00.152) 0:00:48.528 ********** 2025-04-04 01:45:56.977913 | orchestrator | ok: [testbed-node-4] 2025-04-04 01:45:56.979164 | orchestrator | 2025-04-04 01:45:56.980582 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-04-04 01:45:56.981233 | orchestrator | Friday 04 April 2025 01:45:56 +0000 (0:00:00.526) 0:00:49.054 ********** 2025-04-04 01:45:57.601679 | orchestrator | ok: [testbed-node-4] 2025-04-04 01:45:57.601861 | orchestrator | 2025-04-04 01:45:57.602750 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-04-04 01:45:57.603746 | orchestrator | Friday 04 April 2025 01:45:57 +0000 (0:00:00.622) 0:00:49.677 ********** 2025-04-04 01:45:58.152652 | orchestrator | ok: [testbed-node-4] 2025-04-04 01:45:58.152835 | orchestrator | 2025-04-04 01:45:58.154302 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] 
************************* 2025-04-04 01:45:58.155430 | orchestrator | Friday 04 April 2025 01:45:58 +0000 (0:00:00.549) 0:00:50.227 ********** 2025-04-04 01:45:58.327445 | orchestrator | ok: [testbed-node-4] 2025-04-04 01:45:58.328684 | orchestrator | 2025-04-04 01:45:58.329142 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-04-04 01:45:58.331264 | orchestrator | Friday 04 April 2025 01:45:58 +0000 (0:00:00.175) 0:00:50.402 ********** 2025-04-04 01:45:58.479724 | orchestrator | skipping: [testbed-node-4] 2025-04-04 01:45:58.482004 | orchestrator | 2025-04-04 01:45:58.482825 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-04-04 01:45:58.483779 | orchestrator | Friday 04 April 2025 01:45:58 +0000 (0:00:00.152) 0:00:50.555 ********** 2025-04-04 01:45:58.610276 | orchestrator | skipping: [testbed-node-4] 2025-04-04 01:45:58.613549 | orchestrator | 2025-04-04 01:45:58.616729 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-04-04 01:45:58.808814 | orchestrator | Friday 04 April 2025 01:45:58 +0000 (0:00:00.131) 0:00:50.686 ********** 2025-04-04 01:45:58.808929 | orchestrator | ok: [testbed-node-4] => { 2025-04-04 01:45:58.813302 | orchestrator |  "vgs_report": { 2025-04-04 01:45:58.814209 | orchestrator |  "vg": [] 2025-04-04 01:45:58.814707 | orchestrator |  } 2025-04-04 01:45:58.815981 | orchestrator | } 2025-04-04 01:45:58.816495 | orchestrator | 2025-04-04 01:45:58.816526 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-04-04 01:45:59.260780 | orchestrator | Friday 04 April 2025 01:45:58 +0000 (0:00:00.197) 0:00:50.883 ********** 2025-04-04 01:45:59.260903 | orchestrator | skipping: [testbed-node-4] 2025-04-04 01:45:59.262299 | orchestrator | 2025-04-04 01:45:59.263295 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] 
************************ 2025-04-04 01:45:59.265762 | orchestrator | Friday 04 April 2025 01:45:59 +0000 (0:00:00.452) 0:00:51.336 ********** 2025-04-04 01:45:59.428383 | orchestrator | skipping: [testbed-node-4] 2025-04-04 01:45:59.428681 | orchestrator | 2025-04-04 01:45:59.430330 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-04-04 01:45:59.431101 | orchestrator | Friday 04 April 2025 01:45:59 +0000 (0:00:00.168) 0:00:51.504 ********** 2025-04-04 01:45:59.590015 | orchestrator | skipping: [testbed-node-4] 2025-04-04 01:45:59.590491 | orchestrator | 2025-04-04 01:45:59.591480 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-04-04 01:45:59.591557 | orchestrator | Friday 04 April 2025 01:45:59 +0000 (0:00:00.161) 0:00:51.666 ********** 2025-04-04 01:45:59.755437 | orchestrator | skipping: [testbed-node-4] 2025-04-04 01:45:59.756405 | orchestrator | 2025-04-04 01:45:59.757863 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-04-04 01:45:59.760946 | orchestrator | Friday 04 April 2025 01:45:59 +0000 (0:00:00.165) 0:00:51.832 ********** 2025-04-04 01:45:59.921086 | orchestrator | skipping: [testbed-node-4] 2025-04-04 01:45:59.921223 | orchestrator | 2025-04-04 01:45:59.922455 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-04-04 01:45:59.922545 | orchestrator | Friday 04 April 2025 01:45:59 +0000 (0:00:00.166) 0:00:51.999 ********** 2025-04-04 01:46:00.070879 | orchestrator | skipping: [testbed-node-4] 2025-04-04 01:46:00.071485 | orchestrator | 2025-04-04 01:46:00.072665 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-04-04 01:46:00.077622 | orchestrator | Friday 04 April 2025 01:46:00 +0000 (0:00:00.148) 0:00:52.147 ********** 2025-04-04 01:46:00.243836 | orchestrator | skipping: [testbed-node-4] 
2025-04-04 01:46:00.244899 | orchestrator | 2025-04-04 01:46:00.248407 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-04-04 01:46:00.249846 | orchestrator | Friday 04 April 2025 01:46:00 +0000 (0:00:00.172) 0:00:52.319 ********** 2025-04-04 01:46:00.413681 | orchestrator | skipping: [testbed-node-4] 2025-04-04 01:46:00.414563 | orchestrator | 2025-04-04 01:46:00.415599 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-04-04 01:46:00.416410 | orchestrator | Friday 04 April 2025 01:46:00 +0000 (0:00:00.171) 0:00:52.490 ********** 2025-04-04 01:46:00.584100 | orchestrator | skipping: [testbed-node-4] 2025-04-04 01:46:00.584429 | orchestrator | 2025-04-04 01:46:00.585821 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-04-04 01:46:00.585859 | orchestrator | Friday 04 April 2025 01:46:00 +0000 (0:00:00.171) 0:00:52.662 ********** 2025-04-04 01:46:00.757137 | orchestrator | skipping: [testbed-node-4] 2025-04-04 01:46:00.757507 | orchestrator | 2025-04-04 01:46:00.760867 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-04-04 01:46:00.761144 | orchestrator | Friday 04 April 2025 01:46:00 +0000 (0:00:00.171) 0:00:52.833 ********** 2025-04-04 01:46:00.925982 | orchestrator | skipping: [testbed-node-4] 2025-04-04 01:46:00.927370 | orchestrator | 2025-04-04 01:46:00.929535 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-04-04 01:46:00.931444 | orchestrator | Friday 04 April 2025 01:46:00 +0000 (0:00:00.168) 0:00:53.001 ********** 2025-04-04 01:46:01.075823 | orchestrator | skipping: [testbed-node-4] 2025-04-04 01:46:01.075988 | orchestrator | 2025-04-04 01:46:01.077008 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-04-04 01:46:01.078281 | orchestrator | 
Friday 04 April 2025 01:46:01 +0000 (0:00:00.151) 0:00:53.152 ********** 2025-04-04 01:46:01.548441 | orchestrator | skipping: [testbed-node-4] 2025-04-04 01:46:01.549562 | orchestrator | 2025-04-04 01:46:01.550458 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-04-04 01:46:01.552238 | orchestrator | Friday 04 April 2025 01:46:01 +0000 (0:00:00.467) 0:00:53.620 ********** 2025-04-04 01:46:01.711870 | orchestrator | skipping: [testbed-node-4] 2025-04-04 01:46:01.712599 | orchestrator | 2025-04-04 01:46:01.714313 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-04-04 01:46:01.715194 | orchestrator | Friday 04 April 2025 01:46:01 +0000 (0:00:00.168) 0:00:53.789 ********** 2025-04-04 01:46:01.905744 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8a501af7-3b2a-532c-a25d-8e5c367a167f', 'data_vg': 'ceph-8a501af7-3b2a-532c-a25d-8e5c367a167f'})  2025-04-04 01:46:01.906106 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0aaa6bd4-fef5-5601-ad5e-b9ddf526c824', 'data_vg': 'ceph-0aaa6bd4-fef5-5601-ad5e-b9ddf526c824'})  2025-04-04 01:46:01.906973 | orchestrator | skipping: [testbed-node-4] 2025-04-04 01:46:01.907234 | orchestrator | 2025-04-04 01:46:01.908264 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-04-04 01:46:01.908978 | orchestrator | Friday 04 April 2025 01:46:01 +0000 (0:00:00.194) 0:00:53.983 ********** 2025-04-04 01:46:02.124674 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8a501af7-3b2a-532c-a25d-8e5c367a167f', 'data_vg': 'ceph-8a501af7-3b2a-532c-a25d-8e5c367a167f'})  2025-04-04 01:46:02.124890 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0aaa6bd4-fef5-5601-ad5e-b9ddf526c824', 'data_vg': 'ceph-0aaa6bd4-fef5-5601-ad5e-b9ddf526c824'})  2025-04-04 01:46:02.126100 | orchestrator | skipping: 
[testbed-node-4] 2025-04-04 01:46:02.126817 | orchestrator | 2025-04-04 01:46:02.127588 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-04-04 01:46:02.128508 | orchestrator | Friday 04 April 2025 01:46:02 +0000 (0:00:00.218) 0:00:54.201 ********** 2025-04-04 01:46:02.343653 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8a501af7-3b2a-532c-a25d-8e5c367a167f', 'data_vg': 'ceph-8a501af7-3b2a-532c-a25d-8e5c367a167f'})  2025-04-04 01:46:02.344496 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0aaa6bd4-fef5-5601-ad5e-b9ddf526c824', 'data_vg': 'ceph-0aaa6bd4-fef5-5601-ad5e-b9ddf526c824'})  2025-04-04 01:46:02.346137 | orchestrator | skipping: [testbed-node-4] 2025-04-04 01:46:02.347422 | orchestrator | 2025-04-04 01:46:02.348433 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-04-04 01:46:02.349589 | orchestrator | Friday 04 April 2025 01:46:02 +0000 (0:00:00.216) 0:00:54.418 ********** 2025-04-04 01:46:02.541902 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8a501af7-3b2a-532c-a25d-8e5c367a167f', 'data_vg': 'ceph-8a501af7-3b2a-532c-a25d-8e5c367a167f'})  2025-04-04 01:46:02.542588 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0aaa6bd4-fef5-5601-ad5e-b9ddf526c824', 'data_vg': 'ceph-0aaa6bd4-fef5-5601-ad5e-b9ddf526c824'})  2025-04-04 01:46:02.542636 | orchestrator | skipping: [testbed-node-4] 2025-04-04 01:46:02.542716 | orchestrator | 2025-04-04 01:46:02.542785 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-04-04 01:46:02.543040 | orchestrator | Friday 04 April 2025 01:46:02 +0000 (0:00:00.200) 0:00:54.619 ********** 2025-04-04 01:46:02.736562 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8a501af7-3b2a-532c-a25d-8e5c367a167f', 'data_vg': 
'ceph-8a501af7-3b2a-532c-a25d-8e5c367a167f'})
2025-04-04 01:46:02.736735 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0aaa6bd4-fef5-5601-ad5e-b9ddf526c824', 'data_vg': 'ceph-0aaa6bd4-fef5-5601-ad5e-b9ddf526c824'})
2025-04-04 01:46:02.736862 | orchestrator | skipping: [testbed-node-4]
2025-04-04 01:46:02.738323 | orchestrator |
2025-04-04 01:46:02.738867 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2025-04-04 01:46:02.741468 | orchestrator | Friday 04 April 2025 01:46:02 +0000 (0:00:00.194) 0:00:54.813 **********
2025-04-04 01:46:02.912775 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8a501af7-3b2a-532c-a25d-8e5c367a167f', 'data_vg': 'ceph-8a501af7-3b2a-532c-a25d-8e5c367a167f'})
2025-04-04 01:46:02.913762 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0aaa6bd4-fef5-5601-ad5e-b9ddf526c824', 'data_vg': 'ceph-0aaa6bd4-fef5-5601-ad5e-b9ddf526c824'})
2025-04-04 01:46:02.914431 | orchestrator | skipping: [testbed-node-4]
2025-04-04 01:46:02.915164 | orchestrator |
2025-04-04 01:46:02.916210 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2025-04-04 01:46:02.917496 | orchestrator | Friday 04 April 2025 01:46:02 +0000 (0:00:00.176) 0:00:54.990 **********
2025-04-04 01:46:03.108457 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8a501af7-3b2a-532c-a25d-8e5c367a167f', 'data_vg': 'ceph-8a501af7-3b2a-532c-a25d-8e5c367a167f'})
2025-04-04 01:46:03.109296 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0aaa6bd4-fef5-5601-ad5e-b9ddf526c824', 'data_vg': 'ceph-0aaa6bd4-fef5-5601-ad5e-b9ddf526c824'})
2025-04-04 01:46:03.110557 | orchestrator | skipping: [testbed-node-4]
2025-04-04 01:46:03.112402 | orchestrator |
2025-04-04 01:46:03.113065 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2025-04-04 01:46:03.114523 | orchestrator | Friday 04 April 2025 01:46:03 +0000 (0:00:00.194) 0:00:55.184 **********
2025-04-04 01:46:03.298224 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8a501af7-3b2a-532c-a25d-8e5c367a167f', 'data_vg': 'ceph-8a501af7-3b2a-532c-a25d-8e5c367a167f'})
2025-04-04 01:46:03.298480 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0aaa6bd4-fef5-5601-ad5e-b9ddf526c824', 'data_vg': 'ceph-0aaa6bd4-fef5-5601-ad5e-b9ddf526c824'})
2025-04-04 01:46:03.302083 | orchestrator | skipping: [testbed-node-4]
2025-04-04 01:46:03.302549 | orchestrator |
2025-04-04 01:46:03.302651 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2025-04-04 01:46:03.302684 | orchestrator | Friday 04 April 2025 01:46:03 +0000 (0:00:00.189) 0:00:55.374 **********
2025-04-04 01:46:03.803623 | orchestrator | ok: [testbed-node-4]
2025-04-04 01:46:03.803819 | orchestrator |
2025-04-04 01:46:03.804332 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2025-04-04 01:46:03.804603 | orchestrator | Friday 04 April 2025 01:46:03 +0000 (0:00:00.506) 0:00:55.880 **********
2025-04-04 01:46:04.340581 | orchestrator | ok: [testbed-node-4]
2025-04-04 01:46:04.341676 | orchestrator |
2025-04-04 01:46:04.344620 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2025-04-04 01:46:04.345775 | orchestrator | Friday 04 April 2025 01:46:04 +0000 (0:00:00.536) 0:00:56.417 **********
2025-04-04 01:46:04.501057 | orchestrator | ok: [testbed-node-4]
2025-04-04 01:46:04.502005 | orchestrator |
2025-04-04 01:46:04.502591 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2025-04-04 01:46:04.502911 | orchestrator | Friday 04 April 2025 01:46:04 +0000 (0:00:00.161) 0:00:56.578 **********
2025-04-04 01:46:04.748567 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-0aaa6bd4-fef5-5601-ad5e-b9ddf526c824', 'vg_name': 'ceph-0aaa6bd4-fef5-5601-ad5e-b9ddf526c824'})
2025-04-04 01:46:04.751206 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-8a501af7-3b2a-532c-a25d-8e5c367a167f', 'vg_name': 'ceph-8a501af7-3b2a-532c-a25d-8e5c367a167f'})
2025-04-04 01:46:04.751251 | orchestrator |
2025-04-04 01:46:04.752812 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2025-04-04 01:46:04.753731 | orchestrator | Friday 04 April 2025 01:46:04 +0000 (0:00:00.246) 0:00:56.824 **********
2025-04-04 01:46:04.933642 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8a501af7-3b2a-532c-a25d-8e5c367a167f', 'data_vg': 'ceph-8a501af7-3b2a-532c-a25d-8e5c367a167f'})
2025-04-04 01:46:04.934393 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0aaa6bd4-fef5-5601-ad5e-b9ddf526c824', 'data_vg': 'ceph-0aaa6bd4-fef5-5601-ad5e-b9ddf526c824'})
2025-04-04 01:46:04.935057 | orchestrator | skipping: [testbed-node-4]
2025-04-04 01:46:04.935643 | orchestrator |
2025-04-04 01:46:04.939887 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2025-04-04 01:46:05.118986 | orchestrator | Friday 04 April 2025 01:46:04 +0000 (0:00:00.186) 0:00:57.010 **********
2025-04-04 01:46:05.119037 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8a501af7-3b2a-532c-a25d-8e5c367a167f', 'data_vg': 'ceph-8a501af7-3b2a-532c-a25d-8e5c367a167f'})
2025-04-04 01:46:05.120324 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0aaa6bd4-fef5-5601-ad5e-b9ddf526c824', 'data_vg': 'ceph-0aaa6bd4-fef5-5601-ad5e-b9ddf526c824'})
2025-04-04 01:46:05.121790 | orchestrator | skipping: [testbed-node-4]
2025-04-04 01:46:05.125624 | orchestrator |
2025-04-04 01:46:05.131835 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2025-04-04 01:46:05.316249 | orchestrator | Friday 04 April 2025 01:46:05 +0000 (0:00:00.185) 0:00:57.196 **********
2025-04-04 01:46:05.316327 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8a501af7-3b2a-532c-a25d-8e5c367a167f', 'data_vg': 'ceph-8a501af7-3b2a-532c-a25d-8e5c367a167f'})
2025-04-04 01:46:05.319126 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0aaa6bd4-fef5-5601-ad5e-b9ddf526c824', 'data_vg': 'ceph-0aaa6bd4-fef5-5601-ad5e-b9ddf526c824'})
2025-04-04 01:46:05.320474 | orchestrator | skipping: [testbed-node-4]
2025-04-04 01:46:05.320847 | orchestrator |
2025-04-04 01:46:05.322667 | orchestrator | TASK [Print LVM report data] ***************************************************
2025-04-04 01:46:05.323110 | orchestrator | Friday 04 April 2025 01:46:05 +0000 (0:00:00.194) 0:00:57.390 **********
2025-04-04 01:46:06.520043 | orchestrator | ok: [testbed-node-4] => {
2025-04-04 01:46:06.521948 | orchestrator |     "lvm_report": {
2025-04-04 01:46:06.521986 | orchestrator |         "lv": [
2025-04-04 01:46:06.522001 | orchestrator |             {
2025-04-04 01:46:06.522094 | orchestrator |                 "lv_name": "osd-block-0aaa6bd4-fef5-5601-ad5e-b9ddf526c824",
2025-04-04 01:46:06.522123 | orchestrator |                 "vg_name": "ceph-0aaa6bd4-fef5-5601-ad5e-b9ddf526c824"
2025-04-04 01:46:06.522739 | orchestrator |             },
2025-04-04 01:46:06.522772 | orchestrator |             {
2025-04-04 01:46:06.525431 | orchestrator |                 "lv_name": "osd-block-8a501af7-3b2a-532c-a25d-8e5c367a167f",
2025-04-04 01:46:06.527336 | orchestrator |                 "vg_name": "ceph-8a501af7-3b2a-532c-a25d-8e5c367a167f"
2025-04-04 01:46:06.527400 | orchestrator |             }
2025-04-04 01:46:06.527416 | orchestrator |         ],
2025-04-04 01:46:06.527430 | orchestrator |         "pv": [
2025-04-04 01:46:06.527444 | orchestrator |             {
2025-04-04 01:46:06.527464 | orchestrator |                 "pv_name": "/dev/sdb",
2025-04-04 01:46:06.530133 | orchestrator |                 "vg_name": "ceph-8a501af7-3b2a-532c-a25d-8e5c367a167f"
2025-04-04 01:46:06.530160 | orchestrator |             },
2025-04-04 01:46:06.530174 | orchestrator |             {
2025-04-04 01:46:06.530189 | orchestrator |                 "pv_name": "/dev/sdc",
2025-04-04 01:46:06.530239 | orchestrator |                 "vg_name": "ceph-0aaa6bd4-fef5-5601-ad5e-b9ddf526c824"
2025-04-04 01:46:06.530299 | orchestrator |             }
2025-04-04 01:46:06.530320 | orchestrator |         ]
2025-04-04 01:46:06.841179 | orchestrator |     }
2025-04-04 01:46:06.841303 | orchestrator | }
2025-04-04 01:46:06.841322 | orchestrator |
2025-04-04 01:46:06.841389 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2025-04-04 01:46:06.841408 | orchestrator |
2025-04-04 01:46:06.841423 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-04-04 01:46:06.841438 | orchestrator | Friday 04 April 2025 01:46:06 +0000 (0:00:01.199) 0:00:58.590 **********
2025-04-04 01:46:06.841470 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2025-04-04 01:46:07.110444 | orchestrator |
2025-04-04 01:46:07.110492 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-04-04 01:46:07.110508 | orchestrator | Friday 04 April 2025 01:46:06 +0000 (0:00:00.326) 0:00:58.916 **********
2025-04-04 01:46:07.110560 | orchestrator | ok: [testbed-node-5]
2025-04-04 01:46:07.112259 | orchestrator |
2025-04-04 01:46:07.113527 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-04 01:46:07.117781 | orchestrator | Friday 04 April 2025 01:46:07 +0000 (0:00:00.270) 0:00:59.187 **********
2025-04-04 01:46:07.731457 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2025-04-04 01:46:07.731715 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
2025-04-04 01:46:07.732597 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
2025-04-04 01:46:07.732630 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2025-04-04 01:46:07.733410 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
2025-04-04 01:46:07.733959 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5)
2025-04-04 01:46:07.734737 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6)
2025-04-04 01:46:07.735143 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7)
2025-04-04 01:46:07.735616 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda)
2025-04-04 01:46:07.736299 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb)
2025-04-04 01:46:07.736633 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc)
2025-04-04 01:46:07.737323 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd)
2025-04-04 01:46:07.738651 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0)
2025-04-04 01:46:07.738966 | orchestrator |
2025-04-04 01:46:07.740113 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-04 01:46:07.740458 | orchestrator | Friday 04 April 2025 01:46:07 +0000 (0:00:00.614) 0:00:59.801 **********
2025-04-04 01:46:07.951045 | orchestrator | skipping: [testbed-node-5]
2025-04-04 01:46:07.952184 | orchestrator |
2025-04-04 01:46:07.952219 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-04 01:46:07.953031 | orchestrator | Friday 04 April 2025 01:46:07 +0000 (0:00:00.226) 0:01:00.027 **********
2025-04-04 01:46:08.200695 | orchestrator | skipping: [testbed-node-5]
2025-04-04 01:46:08.202525 | orchestrator |
2025-04-04 01:46:08.204528 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-04 01:46:08.204594 | orchestrator | Friday 04 April 2025 01:46:08 +0000 (0:00:00.246) 0:01:00.274 **********
2025-04-04 01:46:08.440146 | orchestrator | skipping: [testbed-node-5]
2025-04-04 01:46:08.440429 | orchestrator |
2025-04-04 01:46:08.441622 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-04 01:46:08.443556 | orchestrator | Friday 04 April 2025 01:46:08 +0000 (0:00:00.241) 0:01:00.515 **********
2025-04-04 01:46:09.199645 | orchestrator | skipping: [testbed-node-5]
2025-04-04 01:46:09.199785 | orchestrator |
2025-04-04 01:46:09.199982 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-04 01:46:09.200463 | orchestrator | Friday 04 April 2025 01:46:09 +0000 (0:00:00.761) 0:01:01.277 **********
2025-04-04 01:46:09.428832 | orchestrator | skipping: [testbed-node-5]
2025-04-04 01:46:09.429879 | orchestrator |
2025-04-04 01:46:09.433087 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-04 01:46:09.433159 | orchestrator | Friday 04 April 2025 01:46:09 +0000 (0:00:00.226) 0:01:01.504 **********
2025-04-04 01:46:09.642270 | orchestrator | skipping: [testbed-node-5]
2025-04-04 01:46:09.642708 | orchestrator |
2025-04-04 01:46:09.642739 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-04 01:46:09.642761 | orchestrator | Friday 04 April 2025 01:46:09 +0000 (0:00:00.215) 0:01:01.719 **********
2025-04-04 01:46:09.862481 | orchestrator | skipping: [testbed-node-5]
2025-04-04 01:46:09.863196 | orchestrator |
2025-04-04 01:46:09.863230 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-04 01:46:10.126331 | orchestrator | Friday 04 April 2025 01:46:09 +0000 (0:00:00.220) 0:01:01.940 **********
2025-04-04 01:46:10.126410 | orchestrator | skipping: [testbed-node-5]
2025-04-04 01:46:10.127574 | orchestrator |
2025-04-04 01:46:10.128476 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-04 01:46:10.129843 | orchestrator | Friday 04 April 2025 01:46:10 +0000 (0:00:00.263) 0:01:02.203 **********
2025-04-04 01:46:10.613445 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_cf831ac4-ec72-4e5f-9ce6-19359424a886)
2025-04-04 01:46:10.614329 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_cf831ac4-ec72-4e5f-9ce6-19359424a886)
2025-04-04 01:46:10.614381 | orchestrator |
2025-04-04 01:46:10.614402 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-04 01:46:10.614726 | orchestrator | Friday 04 April 2025 01:46:10 +0000 (0:00:00.485) 0:01:02.689 **********
2025-04-04 01:46:11.138785 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_40370202-6a2d-4119-85f5-057a26d35c03)
2025-04-04 01:46:11.139156 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_40370202-6a2d-4119-85f5-057a26d35c03)
2025-04-04 01:46:11.139971 | orchestrator |
2025-04-04 01:46:11.140656 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-04 01:46:11.141983 | orchestrator | Friday 04 April 2025 01:46:11 +0000 (0:00:00.527) 0:01:03.216 **********
2025-04-04 01:46:11.632469 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_9bee64ad-b5e3-4230-9d8c-a8a301110b73)
2025-04-04 01:46:11.632641 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_9bee64ad-b5e3-4230-9d8c-a8a301110b73)
2025-04-04 01:46:11.633775 | orchestrator |
2025-04-04 01:46:11.634553 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-04 01:46:11.634591 | orchestrator | Friday 04 April 2025 01:46:11 +0000 (0:00:00.490) 0:01:03.707 **********
2025-04-04 01:46:12.407120 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_07b46a09-8c2f-4f65-be68-ae4e772e446d)
2025-04-04 01:46:12.407662 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_07b46a09-8c2f-4f65-be68-ae4e772e446d)
2025-04-04 01:46:12.408588 | orchestrator |
2025-04-04 01:46:12.409467 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-04 01:46:12.412052 | orchestrator | Friday 04 April 2025 01:46:12 +0000 (0:00:00.775) 0:01:04.482 **********
2025-04-04 01:46:13.394045 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-04-04 01:46:13.394437 | orchestrator |
2025-04-04 01:46:13.394853 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-04-04 01:46:13.397760 | orchestrator | Friday 04 April 2025 01:46:13 +0000 (0:00:00.985) 0:01:05.468 **********
2025-04-04 01:46:13.970233 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0)
2025-04-04 01:46:13.972307 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1)
2025-04-04 01:46:13.975367 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2)
2025-04-04 01:46:13.975724 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3)
2025-04-04 01:46:13.975759 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4)
2025-04-04 01:46:13.977239 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5)
2025-04-04 01:46:13.978373 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6)
2025-04-04 01:46:13.979856 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7)
2025-04-04 01:46:13.983795 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda)
2025-04-04 01:46:13.984582 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb)
2025-04-04 01:46:13.985333 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc)
2025-04-04 01:46:13.985990 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd)
2025-04-04 01:46:13.989984 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0)
2025-04-04 01:46:13.992388 | orchestrator |
2025-04-04 01:46:13.993997 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-04-04 01:46:13.994078 | orchestrator | Friday 04 April 2025 01:46:13 +0000 (0:00:00.578) 0:01:06.047 **********
2025-04-04 01:46:14.202204 | orchestrator | skipping: [testbed-node-5]
2025-04-04 01:46:14.202745 | orchestrator |
2025-04-04 01:46:14.203689 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-04-04 01:46:14.203721 | orchestrator | Friday 04 April 2025 01:46:14 +0000 (0:00:00.231) 0:01:06.279 **********
2025-04-04 01:46:14.428529 | orchestrator | skipping: [testbed-node-5]
2025-04-04 01:46:14.430188 | orchestrator |
2025-04-04 01:46:14.431482 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-04-04 01:46:14.432213 | orchestrator | Friday 04 April 2025 01:46:14 +0000 (0:00:00.225) 0:01:06.504 **********
2025-04-04 01:46:14.649644 | orchestrator | skipping: [testbed-node-5]
2025-04-04 01:46:14.650600 | orchestrator |
2025-04-04 01:46:14.652785 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-04-04 01:46:14.654322 | orchestrator | Friday 04 April 2025 01:46:14 +0000 (0:00:00.221) 0:01:06.726 **********
2025-04-04 01:46:14.894006 | orchestrator | skipping: [testbed-node-5]
2025-04-04 01:46:14.896763 | orchestrator |
2025-04-04 01:46:14.896801 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-04-04 01:46:14.897706 | orchestrator | Friday 04 April 2025 01:46:14 +0000 (0:00:00.242) 0:01:06.968 **********
2025-04-04 01:46:15.130801 | orchestrator | skipping: [testbed-node-5]
2025-04-04 01:46:15.131750 | orchestrator |
2025-04-04 01:46:15.132542 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-04-04 01:46:15.133183 | orchestrator | Friday 04 April 2025 01:46:15 +0000 (0:00:00.239) 0:01:07.208 **********
2025-04-04 01:46:15.350780 | orchestrator | skipping: [testbed-node-5]
2025-04-04 01:46:15.352115 | orchestrator |
2025-04-04 01:46:15.352260 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-04-04 01:46:15.353283 | orchestrator | Friday 04 April 2025 01:46:15 +0000 (0:00:00.219) 0:01:07.428 **********
2025-04-04 01:46:15.600764 | orchestrator | skipping: [testbed-node-5]
2025-04-04 01:46:15.602498 | orchestrator |
2025-04-04 01:46:15.603838 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-04-04 01:46:15.604482 | orchestrator | Friday 04 April 2025 01:46:15 +0000 (0:00:00.247) 0:01:07.675 **********
2025-04-04 01:46:15.826562 | orchestrator | skipping: [testbed-node-5]
2025-04-04 01:46:15.828774 | orchestrator |
2025-04-04 01:46:15.829333 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-04-04 01:46:15.829426 | orchestrator | Friday 04 April 2025 01:46:15 +0000 (0:00:00.225) 0:01:07.901 **********
2025-04-04 01:46:17.218533 | orchestrator | ok: [testbed-node-5] => (item=sda1)
2025-04-04 01:46:17.219226 | orchestrator | ok: [testbed-node-5] => (item=sda14)
2025-04-04 01:46:17.220303 | orchestrator | ok: [testbed-node-5] => (item=sda15)
2025-04-04 01:46:17.221707 | orchestrator | ok: [testbed-node-5] => (item=sda16)
2025-04-04 01:46:17.223119 | orchestrator |
2025-04-04 01:46:17.224079 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-04-04 01:46:17.225134 | orchestrator | Friday 04 April 2025 01:46:17 +0000 (0:00:01.392) 0:01:09.294 **********
2025-04-04 01:46:17.468826 | orchestrator | skipping: [testbed-node-5]
2025-04-04 01:46:17.469060 | orchestrator |
2025-04-04 01:46:17.470300 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-04-04 01:46:17.472030 | orchestrator | Friday 04 April 2025 01:46:17 +0000 (0:00:00.246) 0:01:09.540 **********
2025-04-04 01:46:17.694418 | orchestrator | skipping: [testbed-node-5]
2025-04-04 01:46:17.695496 | orchestrator |
2025-04-04 01:46:17.695539 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-04-04 01:46:17.696629 | orchestrator | Friday 04 April 2025 01:46:17 +0000 (0:00:00.231) 0:01:09.771 **********
2025-04-04 01:46:17.908990 | orchestrator | skipping: [testbed-node-5]
2025-04-04 01:46:17.910301 | orchestrator |
2025-04-04 01:46:17.910342 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-04-04 01:46:17.910711 | orchestrator | Friday 04 April 2025 01:46:17 +0000 (0:00:00.213) 0:01:09.985 **********
2025-04-04 01:46:18.140011 | orchestrator | skipping: [testbed-node-5]
2025-04-04 01:46:18.140733 | orchestrator |
2025-04-04 01:46:18.142849 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2025-04-04 01:46:18.146559 | orchestrator | Friday 04 April 2025 01:46:18 +0000 (0:00:00.229) 0:01:10.215 **********
2025-04-04 01:46:18.318916 | orchestrator | skipping: [testbed-node-5]
2025-04-04 01:46:18.319032 | orchestrator |
2025-04-04 01:46:18.320910 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2025-04-04 01:46:18.325334 | orchestrator | Friday 04 April 2025 01:46:18 +0000 (0:00:00.180) 0:01:10.395 **********
2025-04-04 01:46:18.627047 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'c80404de-2a7c-53fa-825b-8df99123a17e'}})
2025-04-04 01:46:18.629580 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '722fade9-82b0-5f70-b367-45676e1969e2'}})
2025-04-04 01:46:18.630685 | orchestrator |
2025-04-04 01:46:18.631209 | orchestrator | TASK [Create block VGs] ********************************************************
2025-04-04 01:46:18.631568 | orchestrator | Friday 04 April 2025 01:46:18 +0000 (0:00:00.297) 0:01:10.692 **********
2025-04-04 01:46:20.711754 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-c80404de-2a7c-53fa-825b-8df99123a17e', 'data_vg': 'ceph-c80404de-2a7c-53fa-825b-8df99123a17e'})
2025-04-04 01:46:20.713721 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-722fade9-82b0-5f70-b367-45676e1969e2', 'data_vg': 'ceph-722fade9-82b0-5f70-b367-45676e1969e2'})
2025-04-04 01:46:20.713758 | orchestrator |
2025-04-04 01:46:20.714917 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2025-04-04 01:46:20.715919 | orchestrator | Friday 04 April 2025 01:46:20 +0000 (0:00:02.091) 0:01:12.784 **********
2025-04-04 01:46:20.909190 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c80404de-2a7c-53fa-825b-8df99123a17e', 'data_vg': 'ceph-c80404de-2a7c-53fa-825b-8df99123a17e'})
2025-04-04 01:46:20.914592 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-722fade9-82b0-5f70-b367-45676e1969e2', 'data_vg': 'ceph-722fade9-82b0-5f70-b367-45676e1969e2'})
2025-04-04 01:46:20.915916 | orchestrator | skipping: [testbed-node-5]
2025-04-04 01:46:20.916563 | orchestrator |
2025-04-04 01:46:20.917838 | orchestrator | TASK [Create block LVs] ********************************************************
2025-04-04 01:46:20.918121 | orchestrator | Friday 04 April 2025 01:46:20 +0000 (0:00:00.200) 0:01:12.984 **********
2025-04-04 01:46:22.244177 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-c80404de-2a7c-53fa-825b-8df99123a17e', 'data_vg': 'ceph-c80404de-2a7c-53fa-825b-8df99123a17e'})
2025-04-04 01:46:22.244396 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-722fade9-82b0-5f70-b367-45676e1969e2', 'data_vg': 'ceph-722fade9-82b0-5f70-b367-45676e1969e2'})
2025-04-04 01:46:22.247916 | orchestrator |
2025-04-04 01:46:22.433613 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2025-04-04 01:46:22.433679 | orchestrator | Friday 04 April 2025 01:46:22 +0000 (0:00:01.334) 0:01:14.319 **********
2025-04-04 01:46:22.433708 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c80404de-2a7c-53fa-825b-8df99123a17e', 'data_vg': 'ceph-c80404de-2a7c-53fa-825b-8df99123a17e'})
2025-04-04 01:46:22.434450 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-722fade9-82b0-5f70-b367-45676e1969e2', 'data_vg': 'ceph-722fade9-82b0-5f70-b367-45676e1969e2'})
2025-04-04 01:46:22.434486 | orchestrator | skipping: [testbed-node-5]
2025-04-04 01:46:22.435169 | orchestrator |
2025-04-04 01:46:22.436244 | orchestrator | TASK [Create DB VGs] ***********************************************************
2025-04-04 01:46:22.439126 | orchestrator | Friday 04 April 2025 01:46:22 +0000 (0:00:00.190) 0:01:14.510 **********
2025-04-04 01:46:22.593402 | orchestrator | skipping: [testbed-node-5]
2025-04-04 01:46:22.594121 | orchestrator |
2025-04-04 01:46:22.596417 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2025-04-04 01:46:22.597216 | orchestrator | Friday 04 April 2025 01:46:22 +0000 (0:00:00.160) 0:01:14.670 **********
2025-04-04 01:46:22.798339 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c80404de-2a7c-53fa-825b-8df99123a17e', 'data_vg': 'ceph-c80404de-2a7c-53fa-825b-8df99123a17e'})
2025-04-04 01:46:22.799707 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-722fade9-82b0-5f70-b367-45676e1969e2', 'data_vg': 'ceph-722fade9-82b0-5f70-b367-45676e1969e2'})
2025-04-04 01:46:22.801088 | orchestrator | skipping: [testbed-node-5]
2025-04-04 01:46:22.801842 | orchestrator |
2025-04-04 01:46:22.803124 | orchestrator | TASK [Create WAL VGs] **********************************************************
2025-04-04 01:46:22.803949 | orchestrator | Friday 04 April 2025 01:46:22 +0000 (0:00:00.203) 0:01:14.873 **********
2025-04-04 01:46:22.980099 | orchestrator | skipping: [testbed-node-5]
2025-04-04 01:46:22.981040 | orchestrator |
2025-04-04 01:46:22.982341 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2025-04-04 01:46:22.983735 | orchestrator | Friday 04 April 2025 01:46:22 +0000 (0:00:00.183) 0:01:15.057 **********
2025-04-04 01:46:23.174170 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c80404de-2a7c-53fa-825b-8df99123a17e', 'data_vg': 'ceph-c80404de-2a7c-53fa-825b-8df99123a17e'})
2025-04-04 01:46:23.174884 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-722fade9-82b0-5f70-b367-45676e1969e2', 'data_vg': 'ceph-722fade9-82b0-5f70-b367-45676e1969e2'})
2025-04-04 01:46:23.178714 | orchestrator | skipping: [testbed-node-5]
2025-04-04 01:46:23.179251 | orchestrator |
2025-04-04 01:46:23.180246 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2025-04-04 01:46:23.180722 | orchestrator | Friday 04 April 2025 01:46:23 +0000 (0:00:00.192) 0:01:15.249 **********
2025-04-04 01:46:23.330614 | orchestrator | skipping: [testbed-node-5]
2025-04-04 01:46:23.331551 | orchestrator |
2025-04-04 01:46:23.331886 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2025-04-04 01:46:23.332931 | orchestrator | Friday 04 April 2025 01:46:23 +0000 (0:00:00.157) 0:01:15.407 **********
2025-04-04 01:46:23.527195 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c80404de-2a7c-53fa-825b-8df99123a17e', 'data_vg': 'ceph-c80404de-2a7c-53fa-825b-8df99123a17e'})
2025-04-04 01:46:23.527877 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-722fade9-82b0-5f70-b367-45676e1969e2', 'data_vg': 'ceph-722fade9-82b0-5f70-b367-45676e1969e2'})
2025-04-04 01:46:23.528742 | orchestrator | skipping: [testbed-node-5]
2025-04-04 01:46:23.529581 | orchestrator |
2025-04-04 01:46:23.530515 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2025-04-04 01:46:23.531562 | orchestrator | Friday 04 April 2025 01:46:23 +0000 (0:00:00.195) 0:01:15.603 **********
2025-04-04 01:46:23.685700 | orchestrator | ok: [testbed-node-5]
2025-04-04 01:46:23.686529 | orchestrator |
2025-04-04 01:46:23.686646 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2025-04-04 01:46:23.687982 | orchestrator | Friday 04 April 2025 01:46:23 +0000 (0:00:00.160) 0:01:15.763 **********
2025-04-04 01:46:23.898425 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c80404de-2a7c-53fa-825b-8df99123a17e', 'data_vg': 'ceph-c80404de-2a7c-53fa-825b-8df99123a17e'})
2025-04-04 01:46:23.899052 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-722fade9-82b0-5f70-b367-45676e1969e2', 'data_vg': 'ceph-722fade9-82b0-5f70-b367-45676e1969e2'})
2025-04-04 01:46:23.899084 | orchestrator | skipping: [testbed-node-5]
2025-04-04 01:46:23.899102 | orchestrator |
2025-04-04 01:46:23.899124 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2025-04-04 01:46:24.068199 | orchestrator | Friday 04 April 2025 01:46:23 +0000 (0:00:00.205) 0:01:15.968 **********
2025-04-04 01:46:24.068323 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c80404de-2a7c-53fa-825b-8df99123a17e', 'data_vg': 'ceph-c80404de-2a7c-53fa-825b-8df99123a17e'})
2025-04-04 01:46:24.068474 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-722fade9-82b0-5f70-b367-45676e1969e2', 'data_vg': 'ceph-722fade9-82b0-5f70-b367-45676e1969e2'})
2025-04-04 01:46:24.068499 | orchestrator | skipping: [testbed-node-5]
2025-04-04 01:46:24.068523 | orchestrator |
2025-04-04 01:46:24.069420 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2025-04-04 01:46:24.070175 | orchestrator | Friday 04 April 2025 01:46:24 +0000 (0:00:00.175) 0:01:16.144 **********
2025-04-04 01:46:24.550597 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c80404de-2a7c-53fa-825b-8df99123a17e', 'data_vg': 'ceph-c80404de-2a7c-53fa-825b-8df99123a17e'})
2025-04-04 01:46:24.551383 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-722fade9-82b0-5f70-b367-45676e1969e2', 'data_vg': 'ceph-722fade9-82b0-5f70-b367-45676e1969e2'})
2025-04-04 01:46:24.552170 | orchestrator | skipping: [testbed-node-5]
2025-04-04 01:46:24.553088 | orchestrator |
2025-04-04 01:46:24.554844 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2025-04-04 01:46:24.557535 | orchestrator | Friday 04 April 2025 01:46:24 +0000 (0:00:00.481) 0:01:16.625 **********
2025-04-04 01:46:24.714952 | orchestrator | skipping: [testbed-node-5]
2025-04-04 01:46:24.715878 | orchestrator |
2025-04-04 01:46:24.715926 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2025-04-04 01:46:24.716192 | orchestrator | Friday 04 April 2025 01:46:24 +0000 (0:00:00.166) 0:01:16.792 **********
2025-04-04 01:46:24.876051 | orchestrator | skipping: [testbed-node-5]
2025-04-04 01:46:24.876646 | orchestrator |
2025-04-04 01:46:24.877215 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2025-04-04 01:46:24.877821 | orchestrator | Friday 04 April 2025 01:46:24 +0000 (0:00:00.159) 0:01:16.951 **********
2025-04-04 01:46:25.034537 | orchestrator | skipping: [testbed-node-5]
2025-04-04 01:46:25.035089 | orchestrator |
2025-04-04 01:46:25.035304 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2025-04-04 01:46:25.036163 | orchestrator | Friday 04 April 2025 01:46:25 +0000 (0:00:00.160) 0:01:17.111 **********
2025-04-04 01:46:25.198432 | orchestrator | ok: [testbed-node-5] => {
2025-04-04 01:46:25.198899 | orchestrator |     "_num_osds_wanted_per_db_vg": {}
2025-04-04 01:46:25.199247 | orchestrator | }
2025-04-04 01:46:25.200301 | orchestrator |
2025-04-04 01:46:25.200655 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2025-04-04 01:46:25.200977 | orchestrator | Friday 04 April 2025 01:46:25 +0000 (0:00:00.164) 0:01:17.276 **********
2025-04-04 01:46:25.354691 | orchestrator | ok: [testbed-node-5] => {
2025-04-04 01:46:25.355109 | orchestrator |     "_num_osds_wanted_per_wal_vg": {}
2025-04-04 01:46:25.356441 | orchestrator | }
2025-04-04 01:46:25.357471 | orchestrator |
2025-04-04 01:46:25.357959 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2025-04-04 01:46:25.359256 | orchestrator | Friday 04 April 2025 01:46:25 +0000 (0:00:00.155) 0:01:17.431 **********
2025-04-04 01:46:25.516923 | orchestrator | ok: [testbed-node-5] => {
2025-04-04 01:46:25.518506 | orchestrator |     "_num_osds_wanted_per_db_wal_vg": {}
2025-04-04 01:46:25.520171 | orchestrator | }
2025-04-04 01:46:25.521918 | orchestrator |
2025-04-04 01:46:25.522683 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2025-04-04 01:46:25.524088 | orchestrator | Friday 04 April 2025 01:46:25 +0000 (0:00:00.160) 0:01:17.592 **********
2025-04-04 01:46:26.031508 | orchestrator | ok: [testbed-node-5]
2025-04-04 01:46:26.038537 | orchestrator |
2025-04-04 01:46:26.038791 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2025-04-04 01:46:26.039535 | orchestrator | Friday 04 April 2025 01:46:26 +0000 (0:00:00.502) 0:01:18.094 **********
2025-04-04 01:46:26.550262 | orchestrator | ok: [testbed-node-5]
2025-04-04 01:46:26.550767 | orchestrator |
2025-04-04 01:46:26.550805 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2025-04-04 01:46:26.551663 | orchestrator | Friday 04 April 2025 01:46:26 +0000 (0:00:00.525) 0:01:18.620 **********
2025-04-04 01:46:27.142283 | orchestrator | ok: [testbed-node-5]
2025-04-04 01:46:27.143522 | orchestrator |
2025-04-04 01:46:27.144857 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2025-04-04 01:46:27.146420 | orchestrator | Friday 04 April 2025 01:46:27 +0000 (0:00:00.596) 0:01:19.217 **********
2025-04-04 01:46:27.588173 | orchestrator | ok: [testbed-node-5]
2025-04-04 01:46:27.589068 | orchestrator |
2025-04-04 01:46:27.589285 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2025-04-04 01:46:27.590174 | orchestrator | Friday 04 April 2025 01:46:27 +0000 (0:00:00.448) 0:01:19.666 **********
2025-04-04 01:46:27.721906 | orchestrator | skipping: [testbed-node-5]
2025-04-04 01:46:27.722960 | orchestrator |
2025-04-04 01:46:27.723994 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2025-04-04 01:46:27.725189 | orchestrator | Friday 04 April 2025 01:46:27 +0000 (0:00:00.133) 0:01:19.799 **********
2025-04-04 01:46:27.846478 | orchestrator | skipping: [testbed-node-5]
2025-04-04 01:46:27.856051 | orchestrator |
2025-04-04 01:46:27.856685 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2025-04-04 01:46:27.856727 | orchestrator | Friday 04 April 2025 01:46:27 +0000 (0:00:00.124) 0:01:19.923 **********
2025-04-04 01:46:28.012760 | orchestrator | ok: [testbed-node-5] => {
2025-04-04 01:46:28.014445 | orchestrator |     "vgs_report": {
2025-04-04 01:46:28.014817 | orchestrator |         "vg": []
2025-04-04 01:46:28.017522 | orchestrator |     }
2025-04-04 01:46:28.017809 | orchestrator | }
2025-04-04 01:46:28.018778 | orchestrator |
2025-04-04 01:46:28.019634 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2025-04-04 01:46:28.020846 | orchestrator | Friday 04 April 2025 01:46:28 +0000 (0:00:00.165) 0:01:20.089 **********
2025-04-04 01:46:28.173389 | orchestrator | skipping: [testbed-node-5]
2025-04-04 01:46:28.174582 | orchestrator |
2025-04-04 01:46:28.175615 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2025-04-04 01:46:28.178483 | orchestrator | Friday 04 April 2025 01:46:28 +0000 (0:00:00.161) 0:01:20.251 **********
2025-04-04 01:46:28.326974 | orchestrator | skipping: [testbed-node-5]
2025-04-04 01:46:28.327531 | orchestrator |
2025-04-04 01:46:28.328805 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2025-04-04 01:46:28.329296 | orchestrator | Friday 04 April 2025 01:46:28 +0000 (0:00:00.152) 0:01:20.403 **********
2025-04-04 01:46:28.467947 | orchestrator | skipping: [testbed-node-5]
2025-04-04 01:46:28.468636 | orchestrator |
2025-04-04 01:46:28.469598 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2025-04-04 01:46:28.470245 | orchestrator | Friday 04 April 2025 01:46:28 +0000 (0:00:00.140) 0:01:20.543 **********
2025-04-04 01:46:28.619738 | orchestrator | skipping: [testbed-node-5]
2025-04-04 01:46:28.620291 | orchestrator |
2025-04-04 01:46:28.621124 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2025-04-04 01:46:28.621884 | orchestrator | Friday 04 April 2025 01:46:28 +0000 (0:00:00.152) 0:01:20.696 **********
2025-04-04 01:46:28.785906 | orchestrator | skipping: [testbed-node-5]
2025-04-04 01:46:28.786568 | orchestrator |
2025-04-04 01:46:28.788497 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2025-04-04 01:46:28.788967 | orchestrator | Friday 04 April 2025 01:46:28 +0000 (0:00:00.165) 0:01:20.862 **********
2025-04-04 01:46:28.949072 | orchestrator | skipping: [testbed-node-5]
2025-04-04 01:46:28.949180 | orchestrator |
2025-04-04 01:46:28.950211 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2025-04-04 01:46:28.950855 | orchestrator | Friday 04 April 2025 01:46:28 +0000 (0:00:00.161) 0:01:21.023 **********
2025-04-04 01:46:29.112224 | orchestrator | skipping: [testbed-node-5]
2025-04-04 01:46:29.113040 | orchestrator |
2025-04-04 01:46:29.113419 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2025-04-04 01:46:29.114890 | orchestrator | Friday 04 April 2025 01:46:29 +0000 (0:00:00.147) 0:01:21.189 **********
2025-04-04 01:46:29.260893 | orchestrator | skipping: [testbed-node-5]
2025-04-04 01:46:29.261052 | orchestrator |
2025-04-04 01:46:29.262103 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2025-04-04 01:46:29.262944 | orchestrator | Friday 04 April 2025 01:46:29 +0000 (0:00:00.147) 0:01:21.337 **********
2025-04-04 01:46:29.420548 | orchestrator | skipping: [testbed-node-5]
2025-04-04 01:46:29.421780 | orchestrator |
2025-04-04 01:46:29.422681 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2025-04-04 01:46:29.425382 | orchestrator | Friday 04 April 2025 01:46:29 +0000 (0:00:00.158) 0:01:21.495 ********** 2025-04-04 01:46:29.873878 | orchestrator | skipping: [testbed-node-5] 2025-04-04 01:46:29.874119 | orchestrator | 2025-04-04 01:46:29.874593 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-04-04 01:46:29.874695 | orchestrator | Friday 04 April 2025 01:46:29 +0000 (0:00:00.456) 0:01:21.951 ********** 2025-04-04 01:46:30.044686 | orchestrator | skipping: [testbed-node-5] 2025-04-04 01:46:30.044820 | orchestrator | 2025-04-04 01:46:30.044848 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-04-04 01:46:30.045295 | orchestrator | Friday 04 April 2025 01:46:30 +0000 (0:00:00.160) 0:01:22.112 ********** 2025-04-04 01:46:30.209231 | orchestrator | skipping: [testbed-node-5] 2025-04-04 01:46:30.210127 | orchestrator | 2025-04-04 01:46:30.210751 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-04-04 01:46:30.210813 | orchestrator | Friday 04 April 2025 01:46:30 +0000 (0:00:00.174) 0:01:22.287 ********** 2025-04-04 01:46:30.368073 | orchestrator | skipping: [testbed-node-5] 2025-04-04 01:46:30.368787 | orchestrator | 2025-04-04 01:46:30.369335 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-04-04 01:46:30.370160 | orchestrator | Friday 04 April 2025 01:46:30 +0000 (0:00:00.158) 0:01:22.445 ********** 2025-04-04 01:46:30.527552 | orchestrator | skipping: [testbed-node-5] 2025-04-04 01:46:30.528544 | orchestrator | 2025-04-04 01:46:30.528827 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-04-04 01:46:30.528858 | orchestrator | Friday 04 April 2025 01:46:30 +0000 (0:00:00.159) 0:01:22.605 ********** 2025-04-04 01:46:30.724761 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-c80404de-2a7c-53fa-825b-8df99123a17e', 'data_vg': 'ceph-c80404de-2a7c-53fa-825b-8df99123a17e'})  2025-04-04 01:46:30.725280 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-722fade9-82b0-5f70-b367-45676e1969e2', 'data_vg': 'ceph-722fade9-82b0-5f70-b367-45676e1969e2'})  2025-04-04 01:46:30.726272 | orchestrator | skipping: [testbed-node-5] 2025-04-04 01:46:30.729325 | orchestrator | 2025-04-04 01:46:30.730090 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-04-04 01:46:30.730465 | orchestrator | Friday 04 April 2025 01:46:30 +0000 (0:00:00.192) 0:01:22.798 ********** 2025-04-04 01:46:30.898654 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c80404de-2a7c-53fa-825b-8df99123a17e', 'data_vg': 'ceph-c80404de-2a7c-53fa-825b-8df99123a17e'})  2025-04-04 01:46:30.899140 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-722fade9-82b0-5f70-b367-45676e1969e2', 'data_vg': 'ceph-722fade9-82b0-5f70-b367-45676e1969e2'})  2025-04-04 01:46:30.900242 | orchestrator | skipping: [testbed-node-5] 2025-04-04 01:46:30.900681 | orchestrator | 2025-04-04 01:46:30.901172 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-04-04 01:46:30.901819 | orchestrator | Friday 04 April 2025 01:46:30 +0000 (0:00:00.178) 0:01:22.977 ********** 2025-04-04 01:46:31.098934 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c80404de-2a7c-53fa-825b-8df99123a17e', 'data_vg': 'ceph-c80404de-2a7c-53fa-825b-8df99123a17e'})  2025-04-04 01:46:31.100441 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-722fade9-82b0-5f70-b367-45676e1969e2', 'data_vg': 'ceph-722fade9-82b0-5f70-b367-45676e1969e2'})  2025-04-04 01:46:31.101015 | orchestrator | skipping: [testbed-node-5] 2025-04-04 01:46:31.101895 | orchestrator | 2025-04-04 01:46:31.102745 | orchestrator | TASK [Print 'Create WAL LVs for 
ceph_wal_devices'] ***************************** 2025-04-04 01:46:31.103756 | orchestrator | Friday 04 April 2025 01:46:31 +0000 (0:00:00.198) 0:01:23.175 ********** 2025-04-04 01:46:31.280809 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c80404de-2a7c-53fa-825b-8df99123a17e', 'data_vg': 'ceph-c80404de-2a7c-53fa-825b-8df99123a17e'})  2025-04-04 01:46:31.281032 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-722fade9-82b0-5f70-b367-45676e1969e2', 'data_vg': 'ceph-722fade9-82b0-5f70-b367-45676e1969e2'})  2025-04-04 01:46:31.281067 | orchestrator | skipping: [testbed-node-5] 2025-04-04 01:46:31.282002 | orchestrator | 2025-04-04 01:46:31.283103 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-04-04 01:46:31.469060 | orchestrator | Friday 04 April 2025 01:46:31 +0000 (0:00:00.182) 0:01:23.358 ********** 2025-04-04 01:46:31.469184 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c80404de-2a7c-53fa-825b-8df99123a17e', 'data_vg': 'ceph-c80404de-2a7c-53fa-825b-8df99123a17e'})  2025-04-04 01:46:31.469972 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-722fade9-82b0-5f70-b367-45676e1969e2', 'data_vg': 'ceph-722fade9-82b0-5f70-b367-45676e1969e2'})  2025-04-04 01:46:31.470973 | orchestrator | skipping: [testbed-node-5] 2025-04-04 01:46:31.471509 | orchestrator | 2025-04-04 01:46:31.472472 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-04-04 01:46:31.472846 | orchestrator | Friday 04 April 2025 01:46:31 +0000 (0:00:00.188) 0:01:23.546 ********** 2025-04-04 01:46:31.672122 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c80404de-2a7c-53fa-825b-8df99123a17e', 'data_vg': 'ceph-c80404de-2a7c-53fa-825b-8df99123a17e'})  2025-04-04 01:46:31.672781 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-722fade9-82b0-5f70-b367-45676e1969e2', 'data_vg': 'ceph-722fade9-82b0-5f70-b367-45676e1969e2'})  2025-04-04 01:46:31.674282 | orchestrator | skipping: [testbed-node-5] 2025-04-04 01:46:31.677461 | orchestrator | 2025-04-04 01:46:32.171707 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-04-04 01:46:32.171755 | orchestrator | Friday 04 April 2025 01:46:31 +0000 (0:00:00.202) 0:01:23.749 ********** 2025-04-04 01:46:32.171779 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c80404de-2a7c-53fa-825b-8df99123a17e', 'data_vg': 'ceph-c80404de-2a7c-53fa-825b-8df99123a17e'})  2025-04-04 01:46:32.172224 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-722fade9-82b0-5f70-b367-45676e1969e2', 'data_vg': 'ceph-722fade9-82b0-5f70-b367-45676e1969e2'})  2025-04-04 01:46:32.172617 | orchestrator | skipping: [testbed-node-5] 2025-04-04 01:46:32.173268 | orchestrator | 2025-04-04 01:46:32.173869 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-04-04 01:46:32.174244 | orchestrator | Friday 04 April 2025 01:46:32 +0000 (0:00:00.499) 0:01:24.248 ********** 2025-04-04 01:46:32.381781 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c80404de-2a7c-53fa-825b-8df99123a17e', 'data_vg': 'ceph-c80404de-2a7c-53fa-825b-8df99123a17e'})  2025-04-04 01:46:32.382973 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-722fade9-82b0-5f70-b367-45676e1969e2', 'data_vg': 'ceph-722fade9-82b0-5f70-b367-45676e1969e2'})  2025-04-04 01:46:32.383522 | orchestrator | skipping: [testbed-node-5] 2025-04-04 01:46:32.385015 | orchestrator | 2025-04-04 01:46:32.385329 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-04-04 01:46:32.385980 | orchestrator | Friday 04 April 2025 01:46:32 +0000 (0:00:00.210) 0:01:24.458 ********** 2025-04-04 01:46:32.996537 | 
orchestrator | ok: [testbed-node-5] 2025-04-04 01:46:32.996958 | orchestrator | 2025-04-04 01:46:32.997809 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-04-04 01:46:32.999457 | orchestrator | Friday 04 April 2025 01:46:32 +0000 (0:00:00.613) 0:01:25.072 ********** 2025-04-04 01:46:33.583582 | orchestrator | ok: [testbed-node-5] 2025-04-04 01:46:33.584572 | orchestrator | 2025-04-04 01:46:33.584608 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-04-04 01:46:33.585180 | orchestrator | Friday 04 April 2025 01:46:33 +0000 (0:00:00.588) 0:01:25.661 ********** 2025-04-04 01:46:33.756770 | orchestrator | ok: [testbed-node-5] 2025-04-04 01:46:33.756969 | orchestrator | 2025-04-04 01:46:33.757472 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-04-04 01:46:33.757507 | orchestrator | Friday 04 April 2025 01:46:33 +0000 (0:00:00.173) 0:01:25.834 ********** 2025-04-04 01:46:33.965155 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-722fade9-82b0-5f70-b367-45676e1969e2', 'vg_name': 'ceph-722fade9-82b0-5f70-b367-45676e1969e2'}) 2025-04-04 01:46:33.965629 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-c80404de-2a7c-53fa-825b-8df99123a17e', 'vg_name': 'ceph-c80404de-2a7c-53fa-825b-8df99123a17e'}) 2025-04-04 01:46:33.965669 | orchestrator | 2025-04-04 01:46:33.966630 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-04-04 01:46:33.966703 | orchestrator | Friday 04 April 2025 01:46:33 +0000 (0:00:00.208) 0:01:26.042 ********** 2025-04-04 01:46:34.185844 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c80404de-2a7c-53fa-825b-8df99123a17e', 'data_vg': 'ceph-c80404de-2a7c-53fa-825b-8df99123a17e'})  2025-04-04 01:46:34.186503 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-722fade9-82b0-5f70-b367-45676e1969e2', 'data_vg': 'ceph-722fade9-82b0-5f70-b367-45676e1969e2'})  2025-04-04 01:46:34.187199 | orchestrator | skipping: [testbed-node-5] 2025-04-04 01:46:34.188455 | orchestrator | 2025-04-04 01:46:34.188687 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-04-04 01:46:34.189188 | orchestrator | Friday 04 April 2025 01:46:34 +0000 (0:00:00.218) 0:01:26.261 ********** 2025-04-04 01:46:34.370693 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c80404de-2a7c-53fa-825b-8df99123a17e', 'data_vg': 'ceph-c80404de-2a7c-53fa-825b-8df99123a17e'})  2025-04-04 01:46:34.371183 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-722fade9-82b0-5f70-b367-45676e1969e2', 'data_vg': 'ceph-722fade9-82b0-5f70-b367-45676e1969e2'})  2025-04-04 01:46:34.372333 | orchestrator | skipping: [testbed-node-5] 2025-04-04 01:46:34.372483 | orchestrator | 2025-04-04 01:46:34.373597 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-04-04 01:46:34.374077 | orchestrator | Friday 04 April 2025 01:46:34 +0000 (0:00:00.187) 0:01:26.448 ********** 2025-04-04 01:46:34.594169 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c80404de-2a7c-53fa-825b-8df99123a17e', 'data_vg': 'ceph-c80404de-2a7c-53fa-825b-8df99123a17e'})  2025-04-04 01:46:34.594830 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-722fade9-82b0-5f70-b367-45676e1969e2', 'data_vg': 'ceph-722fade9-82b0-5f70-b367-45676e1969e2'})  2025-04-04 01:46:34.595811 | orchestrator | skipping: [testbed-node-5] 2025-04-04 01:46:34.596545 | orchestrator | 2025-04-04 01:46:34.597255 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-04-04 01:46:34.598573 | orchestrator | Friday 04 April 2025 01:46:34 +0000 (0:00:00.221) 0:01:26.670 ********** 2025-04-04 01:46:35.371977 | 
orchestrator | ok: [testbed-node-5] => { 2025-04-04 01:46:35.372971 | orchestrator |  "lvm_report": { 2025-04-04 01:46:35.373437 | orchestrator |  "lv": [ 2025-04-04 01:46:35.373960 | orchestrator |  { 2025-04-04 01:46:35.376870 | orchestrator |  "lv_name": "osd-block-722fade9-82b0-5f70-b367-45676e1969e2", 2025-04-04 01:46:35.377115 | orchestrator |  "vg_name": "ceph-722fade9-82b0-5f70-b367-45676e1969e2" 2025-04-04 01:46:35.377735 | orchestrator |  }, 2025-04-04 01:46:35.378145 | orchestrator |  { 2025-04-04 01:46:35.378640 | orchestrator |  "lv_name": "osd-block-c80404de-2a7c-53fa-825b-8df99123a17e", 2025-04-04 01:46:35.379004 | orchestrator |  "vg_name": "ceph-c80404de-2a7c-53fa-825b-8df99123a17e" 2025-04-04 01:46:35.379573 | orchestrator |  } 2025-04-04 01:46:35.380167 | orchestrator |  ], 2025-04-04 01:46:35.380531 | orchestrator |  "pv": [ 2025-04-04 01:46:35.381591 | orchestrator |  { 2025-04-04 01:46:35.381652 | orchestrator |  "pv_name": "/dev/sdb", 2025-04-04 01:46:35.382528 | orchestrator |  "vg_name": "ceph-c80404de-2a7c-53fa-825b-8df99123a17e" 2025-04-04 01:46:35.382830 | orchestrator |  }, 2025-04-04 01:46:35.383485 | orchestrator |  { 2025-04-04 01:46:35.384311 | orchestrator |  "pv_name": "/dev/sdc", 2025-04-04 01:46:35.384557 | orchestrator |  "vg_name": "ceph-722fade9-82b0-5f70-b367-45676e1969e2" 2025-04-04 01:46:35.385134 | orchestrator |  } 2025-04-04 01:46:35.385298 | orchestrator |  ] 2025-04-04 01:46:35.385593 | orchestrator |  } 2025-04-04 01:46:35.385904 | orchestrator | } 2025-04-04 01:46:35.386526 | orchestrator | 2025-04-04 01:46:35.386780 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-04 01:46:35.388939 | orchestrator | 2025-04-04 01:46:35 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-04-04 01:46:35.389026 | orchestrator | 2025-04-04 01:46:35 | INFO  | Please wait and do not abort execution. 
2025-04-04 01:46:35.389668 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-04-04 01:46:35.390157 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-04-04 01:46:35.390578 | orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-04-04 01:46:35.390961 | orchestrator | 2025-04-04 01:46:35.391410 | orchestrator | 2025-04-04 01:46:35.392020 | orchestrator | 2025-04-04 01:46:35.392275 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-04 01:46:35.392823 | orchestrator | Friday 04 April 2025 01:46:35 +0000 (0:00:00.777) 0:01:27.447 ********** 2025-04-04 01:46:35.393234 | orchestrator | =============================================================================== 2025-04-04 01:46:35.393846 | orchestrator | Create block VGs -------------------------------------------------------- 6.20s 2025-04-04 01:46:35.394147 | orchestrator | Create block LVs -------------------------------------------------------- 3.98s 2025-04-04 01:46:35.394608 | orchestrator | Print LVM report data --------------------------------------------------- 3.02s 2025-04-04 01:46:35.394939 | orchestrator | Add known links to the list of available block devices ------------------ 2.03s 2025-04-04 01:46:35.395426 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.84s 2025-04-04 01:46:35.395746 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.73s 2025-04-04 01:46:35.396507 | orchestrator | Add known partitions to the list of available block devices ------------- 1.70s 2025-04-04 01:46:35.396707 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.70s 2025-04-04 01:46:35.397137 | orchestrator | Gather DB+WAL VGs with total and available size in bytes 
---------------- 1.67s 2025-04-04 01:46:35.397471 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.65s 2025-04-04 01:46:35.397785 | orchestrator | Add known partitions to the list of available block devices ------------- 1.39s 2025-04-04 01:46:35.398241 | orchestrator | Create dict of block VGs -> PVs from ceph_osd_devices ------------------- 1.11s 2025-04-04 01:46:35.398560 | orchestrator | Add known partitions to the list of available block devices ------------- 0.99s 2025-04-04 01:46:35.398897 | orchestrator | Add known links to the list of available block devices ------------------ 0.99s 2025-04-04 01:46:35.399339 | orchestrator | Create list of VG/LV names ---------------------------------------------- 0.94s 2025-04-04 01:46:35.399608 | orchestrator | Create DB LVs for ceph_db_wal_devices ----------------------------------- 0.89s 2025-04-04 01:46:35.400276 | orchestrator | Print 'Create DB LVs for ceph_db_devices' ------------------------------- 0.87s 2025-04-04 01:46:35.400460 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.87s 2025-04-04 01:46:35.400866 | orchestrator | Print 'Create DB VGs' --------------------------------------------------- 0.86s 2025-04-04 01:46:35.401202 | orchestrator | Count OSDs put on ceph_db_wal_devices defined in lvm_volumes ------------ 0.86s 2025-04-04 01:46:37.959741 | orchestrator | 2025-04-04 01:46:37 | INFO  | Task 5943d250-9ddd-4a7c-8d2f-bf2f70f21f53 (facts) was prepared for execution. 2025-04-04 01:46:41.807304 | orchestrator | 2025-04-04 01:46:37 | INFO  | It takes a moment until task 5943d250-9ddd-4a7c-8d2f-bf2f70f21f53 (facts) has been started and output is visible here. 
2025-04-04 01:46:41.807503 | orchestrator | 2025-04-04 01:46:41.807580 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-04-04 01:46:41.807603 | orchestrator | 2025-04-04 01:46:41.811317 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-04-04 01:46:43.381085 | orchestrator | Friday 04 April 2025 01:46:41 +0000 (0:00:00.247) 0:00:00.247 ********** 2025-04-04 01:46:43.381237 | orchestrator | ok: [testbed-manager] 2025-04-04 01:46:43.381323 | orchestrator | ok: [testbed-node-3] 2025-04-04 01:46:43.382382 | orchestrator | ok: [testbed-node-1] 2025-04-04 01:46:43.382587 | orchestrator | ok: [testbed-node-0] 2025-04-04 01:46:43.382678 | orchestrator | ok: [testbed-node-4] 2025-04-04 01:46:43.384390 | orchestrator | ok: [testbed-node-2] 2025-04-04 01:46:43.384454 | orchestrator | ok: [testbed-node-5] 2025-04-04 01:46:43.574554 | orchestrator | 2025-04-04 01:46:43.574646 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-04-04 01:46:43.574664 | orchestrator | Friday 04 April 2025 01:46:43 +0000 (0:00:01.570) 0:00:01.817 ********** 2025-04-04 01:46:43.574718 | orchestrator | skipping: [testbed-manager] 2025-04-04 01:46:43.677719 | orchestrator | skipping: [testbed-node-0] 2025-04-04 01:46:43.768128 | orchestrator | skipping: [testbed-node-1] 2025-04-04 01:46:43.886530 | orchestrator | skipping: [testbed-node-2] 2025-04-04 01:46:43.976562 | orchestrator | skipping: [testbed-node-3] 2025-04-04 01:46:44.899846 | orchestrator | skipping: [testbed-node-4] 2025-04-04 01:46:44.900800 | orchestrator | skipping: [testbed-node-5] 2025-04-04 01:46:44.900874 | orchestrator | 2025-04-04 01:46:44.900936 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-04-04 01:46:44.902489 | orchestrator | 2025-04-04 01:46:44.903931 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2025-04-04 01:46:44.903992 | orchestrator | Friday 04 April 2025 01:46:44 +0000 (0:00:01.519) 0:00:03.337 ********** 2025-04-04 01:46:48.963559 | orchestrator | ok: [testbed-node-1] 2025-04-04 01:46:48.964869 | orchestrator | ok: [testbed-node-2] 2025-04-04 01:46:48.969205 | orchestrator | ok: [testbed-node-0] 2025-04-04 01:46:48.970185 | orchestrator | ok: [testbed-manager] 2025-04-04 01:46:48.973169 | orchestrator | ok: [testbed-node-3] 2025-04-04 01:46:48.973420 | orchestrator | ok: [testbed-node-4] 2025-04-04 01:46:48.974124 | orchestrator | ok: [testbed-node-5] 2025-04-04 01:46:48.974589 | orchestrator | 2025-04-04 01:46:48.975287 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-04-04 01:46:48.975758 | orchestrator | 2025-04-04 01:46:48.976186 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-04-04 01:46:48.976673 | orchestrator | Friday 04 April 2025 01:46:48 +0000 (0:00:04.069) 0:00:07.407 ********** 2025-04-04 01:46:49.383780 | orchestrator | skipping: [testbed-manager] 2025-04-04 01:46:49.485929 | orchestrator | skipping: [testbed-node-0] 2025-04-04 01:46:49.580104 | orchestrator | skipping: [testbed-node-1] 2025-04-04 01:46:49.684124 | orchestrator | skipping: [testbed-node-2] 2025-04-04 01:46:49.770104 | orchestrator | skipping: [testbed-node-3] 2025-04-04 01:46:49.818591 | orchestrator | skipping: [testbed-node-4] 2025-04-04 01:46:49.818990 | orchestrator | skipping: [testbed-node-5] 2025-04-04 01:46:49.820207 | orchestrator | 2025-04-04 01:46:49.820610 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-04 01:46:49.821514 | orchestrator | 2025-04-04 01:46:49 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 
2025-04-04 01:46:49.822547 | orchestrator | 2025-04-04 01:46:49 | INFO  | Please wait and do not abort execution. 2025-04-04 01:46:49.822580 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-04 01:46:49.823579 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-04 01:46:49.824850 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-04 01:46:49.826301 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-04 01:46:49.828034 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-04 01:46:49.828186 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-04 01:46:49.829245 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-04 01:46:49.830006 | orchestrator | 2025-04-04 01:46:49.831052 | orchestrator | Friday 04 April 2025 01:46:49 +0000 (0:00:00.855) 0:00:08.262 ********** 2025-04-04 01:46:49.831488 | orchestrator | =============================================================================== 2025-04-04 01:46:49.832397 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.07s 2025-04-04 01:46:49.833119 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.57s 2025-04-04 01:46:49.833888 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.52s 2025-04-04 01:46:49.834905 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.86s 2025-04-04 01:46:50.714450 | orchestrator | 2025-04-04 01:46:50.715702 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Fri Apr 4 01:46:50 UTC 2025 2025-04-04 01:46:52.457550 | 
orchestrator | 2025-04-04 01:46:52.457703 | orchestrator | 2025-04-04 01:46:52 | INFO  | Collection nutshell is prepared for execution 2025-04-04 01:46:52.463571 | orchestrator | 2025-04-04 01:46:52 | INFO  | D [0] - dotfiles 2025-04-04 01:46:52.463624 | orchestrator | 2025-04-04 01:46:52 | INFO  | D [0] - homer 2025-04-04 01:46:52.464950 | orchestrator | 2025-04-04 01:46:52 | INFO  | D [0] - netdata 2025-04-04 01:46:52.464970 | orchestrator | 2025-04-04 01:46:52 | INFO  | D [0] - openstackclient 2025-04-04 01:46:52.464981 | orchestrator | 2025-04-04 01:46:52 | INFO  | D [0] - phpmyadmin 2025-04-04 01:46:52.464992 | orchestrator | 2025-04-04 01:46:52 | INFO  | A [0] - common 2025-04-04 01:46:52.465006 | orchestrator | 2025-04-04 01:46:52 | INFO  | A [1] -- loadbalancer 2025-04-04 01:46:52.465190 | orchestrator | 2025-04-04 01:46:52 | INFO  | D [2] --- opensearch 2025-04-04 01:46:52.465208 | orchestrator | 2025-04-04 01:46:52 | INFO  | A [2] --- mariadb-ng 2025-04-04 01:46:52.465219 | orchestrator | 2025-04-04 01:46:52 | INFO  | D [3] ---- horizon 2025-04-04 01:46:52.465229 | orchestrator | 2025-04-04 01:46:52 | INFO  | A [3] ---- keystone 2025-04-04 01:46:52.465240 | orchestrator | 2025-04-04 01:46:52 | INFO  | A [4] ----- neutron 2025-04-04 01:46:52.465250 | orchestrator | 2025-04-04 01:46:52 | INFO  | D [5] ------ wait-for-nova 2025-04-04 01:46:52.465289 | orchestrator | 2025-04-04 01:46:52 | INFO  | A [5] ------ octavia 2025-04-04 01:46:52.465417 | orchestrator | 2025-04-04 01:46:52 | INFO  | D [4] ----- barbican 2025-04-04 01:46:52.465735 | orchestrator | 2025-04-04 01:46:52 | INFO  | D [4] ----- designate 2025-04-04 01:46:52.465757 | orchestrator | 2025-04-04 01:46:52 | INFO  | D [4] ----- ironic 2025-04-04 01:46:52.465956 | orchestrator | 2025-04-04 01:46:52 | INFO  | D [4] ----- placement 2025-04-04 01:46:52.465973 | orchestrator | 2025-04-04 01:46:52 | INFO  | D [4] ----- magnum 2025-04-04 01:46:52.465988 | orchestrator | 2025-04-04 01:46:52 | INFO  | A [1] 
-- openvswitch 2025-04-04 01:46:52.466394 | orchestrator | 2025-04-04 01:46:52 | INFO  | D [2] --- ovn 2025-04-04 01:46:52.466417 | orchestrator | 2025-04-04 01:46:52 | INFO  | D [1] -- memcached 2025-04-04 01:46:52.466486 | orchestrator | 2025-04-04 01:46:52 | INFO  | D [1] -- redis 2025-04-04 01:46:52.466499 | orchestrator | 2025-04-04 01:46:52 | INFO  | D [1] -- rabbitmq-ng 2025-04-04 01:46:52.466512 | orchestrator | 2025-04-04 01:46:52 | INFO  | A [0] - kubernetes 2025-04-04 01:46:52.467756 | orchestrator | 2025-04-04 01:46:52 | INFO  | D [1] -- kubeconfig 2025-04-04 01:46:52.467875 | orchestrator | 2025-04-04 01:46:52 | INFO  | A [1] -- copy-kubeconfig 2025-04-04 01:46:52.468706 | orchestrator | 2025-04-04 01:46:52 | INFO  | A [0] - ceph 2025-04-04 01:46:52.468744 | orchestrator | 2025-04-04 01:46:52 | INFO  | A [1] -- ceph-pools 2025-04-04 01:46:52.659109 | orchestrator | 2025-04-04 01:46:52 | INFO  | A [2] --- copy-ceph-keys 2025-04-04 01:46:52.659176 | orchestrator | 2025-04-04 01:46:52 | INFO  | A [3] ---- cephclient 2025-04-04 01:46:52.659192 | orchestrator | 2025-04-04 01:46:52 | INFO  | D [4] ----- ceph-bootstrap-dashboard 2025-04-04 01:46:52.659206 | orchestrator | 2025-04-04 01:46:52 | INFO  | A [4] ----- wait-for-keystone 2025-04-04 01:46:52.659221 | orchestrator | 2025-04-04 01:46:52 | INFO  | D [5] ------ kolla-ceph-rgw 2025-04-04 01:46:52.659235 | orchestrator | 2025-04-04 01:46:52 | INFO  | D [5] ------ glance 2025-04-04 01:46:52.659249 | orchestrator | 2025-04-04 01:46:52 | INFO  | D [5] ------ cinder 2025-04-04 01:46:52.659262 | orchestrator | 2025-04-04 01:46:52 | INFO  | D [5] ------ nova 2025-04-04 01:46:52.659276 | orchestrator | 2025-04-04 01:46:52 | INFO  | A [4] ----- prometheus 2025-04-04 01:46:52.659290 | orchestrator | 2025-04-04 01:46:52 | INFO  | D [5] ------ grafana 2025-04-04 01:46:52.659315 | orchestrator | 2025-04-04 01:46:52 | INFO  | All tasks of the collection nutshell are prepared for execution 2025-04-04 01:46:55.642094 | 
orchestrator | 2025-04-04 01:46:52 | INFO  | Tasks are running in the background
2025-04-04 01:46:55.642256 | orchestrator | 2025-04-04 01:46:55 | INFO  | No task IDs specified, wait for all currently running tasks
2025-04-04 01:46:57.764636 | orchestrator | 2025-04-04 01:46:57 | INFO  | Task fd1c07e6-f0db-4865-a8a3-41f95612fdf3 is in state STARTED
2025-04-04 01:46:57.766169 | orchestrator | 2025-04-04 01:46:57 | INFO  | Task ec99d496-a028-4f88-ac8c-56fe3b93d270 is in state STARTED
2025-04-04 01:46:57.766206 | orchestrator | 2025-04-04 01:46:57 | INFO  | Task 9312bec5-0757-4d44-a029-6870706ef046 is in state STARTED
2025-04-04 01:46:57.766228 | orchestrator | 2025-04-04 01:46:57 | INFO  | Task 80576d94-e01b-4d4e-bc33-b26dde2d049f is in state STARTED
2025-04-04 01:46:57.767046 | orchestrator | 2025-04-04 01:46:57 | INFO  | Task 653a3b35-bc81-46b6-b788-78c30fba3d04 is in state STARTED
2025-04-04 01:46:57.769078 | orchestrator | 2025-04-04 01:46:57 | INFO  | Task 366bace1-15e8-415f-b9ae-2fd429ba5c08 is in state STARTED
2025-04-04 01:46:57.774099 | orchestrator | 2025-04-04 01:46:57 | INFO  | Wait 1 second(s) until the next check
[... identical polling cycles repeat every ~3 s at 01:47:00, 01:47:03, 01:47:07, 01:47:10, 01:47:13, 01:47:16 and 01:47:19; all six tasks remain in state STARTED ...]
2025-04-04 01:47:22.739940 | orchestrator | 2025-04-04 01:47:22 | INFO  | Task fd1c07e6-f0db-4865-a8a3-41f95612fdf3 is in state STARTED
2025-04-04 01:47:22.748791 | orchestrator | 2025-04-04 01:47:22 | INFO  | Task ec99d496-a028-4f88-ac8c-56fe3b93d270 is in state STARTED
2025-04-04 01:47:22.752920 | orchestrator | 2025-04-04
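The polling pattern in this stretch of the log (report each task's state, then "Wait 1 second(s) until the next check") can be sketched as a small loop. This is a minimal illustration, not the osism manager's actual code; `get_state` is a stand-in for whatever backend call it uses (e.g. something like a Celery `AsyncResult.state` lookup), which is an assumption here:

```python
# Minimal sketch of the task-polling loop visible in the log. `get_state`
# is a hypothetical callable (task_id -> state string); the real osism
# implementation is not shown in this log.
import time

TERMINAL = {"SUCCESS", "FAILURE"}  # assumed terminal states

def wait_for_tasks(task_ids, get_state, interval=1, sleep=time.sleep):
    """Poll until every task reaches a terminal state; return final states."""
    states = {}
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = get_state(task_id)
            states[task_id] = state
            print(f"Task {task_id} is in state {state}")
        # Drop tasks that finished during this pass.
        pending = {t for t in pending if states[t] not in TERMINAL}
        if pending:
            print(f"Wait {interval} second(s) until the next check")
            sleep(interval)
    return states
```

Injecting `sleep` makes the loop testable without real delays, and matches the log's behaviour of skipping the wait message once nothing is pending.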
01:47:22 | INFO  | Task 9312bec5-0757-4d44-a029-6870706ef046 is in state STARTED
2025-04-04 01:47:22.758666 | orchestrator | 2025-04-04 01:47:22 | INFO  | Task 80576d94-e01b-4d4e-bc33-b26dde2d049f is in state STARTED
2025-04-04 01:47:22.768547 | orchestrator | 2025-04-04 01:47:22 | INFO  | Task 653a3b35-bc81-46b6-b788-78c30fba3d04 is in state STARTED
2025-04-04 01:47:22.769232 | orchestrator | 2025-04-04 01:47:22 | INFO  | Task 366bace1-15e8-415f-b9ae-2fd429ba5c08 is in state STARTED
2025-04-04 01:47:22.770406 | orchestrator | 2025-04-04 01:47:22 | INFO  | Wait 1 second(s) until the next check
2025-04-04 01:47:25.829197 | orchestrator | 2025-04-04 01:47:25 | INFO  | Task fd1c07e6-f0db-4865-a8a3-41f95612fdf3 is in state STARTED
2025-04-04 01:47:25.830767 | orchestrator | 2025-04-04 01:47:25 | INFO  | Task ec99d496-a028-4f88-ac8c-56fe3b93d270 is in state STARTED
2025-04-04 01:47:25.831130 | orchestrator | 2025-04-04 01:47:25 | INFO  | Task 9312bec5-0757-4d44-a029-6870706ef046 is in state SUCCESS
2025-04-04 01:47:25.831854 | orchestrator |
2025-04-04 01:47:25.831887 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] *****************************************
2025-04-04 01:47:25.831904 | orchestrator |
2025-04-04 01:47:25.831920 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.] ****
2025-04-04 01:47:25.831935 | orchestrator | Friday 04 April 2025  01:47:05 +0000 (0:00:00.734)       0:00:00.734 **********
2025-04-04 01:47:25.831951 | orchestrator | changed: [testbed-node-1]
2025-04-04 01:47:25.831967 | orchestrator | changed: [testbed-node-0]
2025-04-04 01:47:25.831983 | orchestrator | changed: [testbed-node-2]
2025-04-04 01:47:25.831998 | orchestrator | changed: [testbed-manager]
2025-04-04 01:47:25.832013 | orchestrator | changed: [testbed-node-3]
2025-04-04 01:47:25.832028 | orchestrator | changed: [testbed-node-4]
2025-04-04 01:47:25.832043 | orchestrator | changed: [testbed-node-5]
2025-04-04 01:47:25.832058 | orchestrator |
2025-04-04 01:47:25.832074 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] ********
2025-04-04 01:47:25.832089 | orchestrator | Friday 04 April 2025  01:47:10 +0000 (0:00:04.884)       0:00:05.618 **********
2025-04-04 01:47:25.832105 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf)
2025-04-04 01:47:25.832120 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf)
2025-04-04 01:47:25.832143 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf)
2025-04-04 01:47:25.832159 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf)
2025-04-04 01:47:25.832174 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf)
2025-04-04 01:47:25.832189 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf)
2025-04-04 01:47:25.832204 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf)
2025-04-04 01:47:25.832220 | orchestrator |
2025-04-04 01:47:25.832235 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] ***
2025-04-04 01:47:25.832251 | orchestrator | Friday 04 April 2025  01:47:15 +0000 (0:00:04.630)       0:00:10.249 **********
2025-04-04 01:47:25.832270 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-04-04 01:47:12.240714', 'end': '2025-04-04 01:47:12.246598', 'delta': '0:00:00.005884', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
[... equivalent loop results for testbed-manager and testbed-node-1 through testbed-node-5: on every host `ls -F ~/.tmux.conf` returned rc 2 ("ls: cannot access '/home/dragon/.tmux.conf': No such file or directory") with failed_when_result False ...]
2025-04-04 01:47:25.832490 | orchestrator |
2025-04-04 01:47:25.832504 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] ******************
2025-04-04 01:47:25.832519 | orchestrator | Friday 04 April 2025  01:47:20 +0000 (0:00:04.628)       0:00:14.877 **********
2025-04-04 01:47:25.832533 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf)
2025-04-04 01:47:25.832547 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf)
2025-04-04 01:47:25.832561 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf)
2025-04-04 01:47:25.832575 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf)
2025-04-04 01:47:25.832589 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf)
2025-04-04 01:47:25.832602 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf)
2025-04-04 01:47:25.832617 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf)
2025-04-04 01:47:25.832631 | orchestrator |
2025-04-04 01:47:25.832645 | orchestrator | PLAY RECAP *********************************************************************
2025-04-04 01:47:25.832659 | orchestrator | testbed-manager : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-04-04 01:47:25.832674 | orchestrator | testbed-node-0 : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-04-04 01:47:25.832689 | orchestrator | testbed-node-1 : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-04-04 01:47:25.832709 | orchestrator | testbed-node-2 : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-04-04 01:47:25.835693 | orchestrator | testbed-node-3 : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-04-04 01:47:25.835728 | orchestrator | testbed-node-4 : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-04-04 01:47:25.835743 | orchestrator | testbed-node-5 : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-04-04 01:47:25.835759 | orchestrator |
2025-04-04 01:47:25.835774 | orchestrator | Friday 04 April 2025  01:47:24 +0000 (0:00:04.640)       0:00:19.518 **********
2025-04-04 01:47:25.835790 | orchestrator | ===============================================================================
2025-04-04 01:47:25.835805 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 4.88s
2025-04-04 01:47:25.835820 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 4.64s
2025-04-04 01:47:25.835835 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. -------- 4.63s
2025-04-04 01:47:25.835851 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 4.63s
2025-04-04 01:47:25.835874 | orchestrator | 2025-04-04 01:47:25 | INFO  | Task 80576d94-e01b-4d4e-bc33-b26dde2d049f is in state STARTED
2025-04-04 01:47:25.836689 | orchestrator | 2025-04-04 01:47:25 | INFO  | Task 653a3b35-bc81-46b6-b788-78c30fba3d04 is in state STARTED
2025-04-04 01:47:25.840638 | orchestrator | 2025-04-04 01:47:25 | INFO  | Task 366bace1-15e8-415f-b9ae-2fd429ba5c08 is in state STARTED
2025-04-04 01:47:25.840976 | orchestrator | 2025-04-04 01:47:25 | INFO  | Wait 1 second(s) until the next check
2025-04-04 01:47:28.968877 | orchestrator | 2025-04-04 01:47:28 | INFO  | Task fd1c07e6-f0db-4865-a8a3-41f95612fdf3 is in state STARTED
2025-04-04 01:47:28.970196 | orchestrator | 2025-04-04 01:47:28 | INFO  | Task ec99d496-a028-4f88-ac8c-56fe3b93d270 is in state STARTED
2025-04-04 01:47:28.978564 | orchestrator | 2025-04-04 01:47:28 | INFO  | Task b36de616-88a7-4876-a066-5a28aec9b8ca is in state STARTED
2025-04-04 01:47:28.983834 | orchestrator | 2025-04-04 01:47:28 | INFO  | Task
80576d94-e01b-4d4e-bc33-b26dde2d049f is in state STARTED
2025-04-04 01:47:28.996240 | orchestrator | 2025-04-04 01:47:28 | INFO  | Task 653a3b35-bc81-46b6-b788-78c30fba3d04 is in state STARTED
2025-04-04 01:47:29.001381 | orchestrator | 2025-04-04 01:47:28 | INFO  | Task 366bace1-15e8-415f-b9ae-2fd429ba5c08 is in state STARTED
2025-04-04 01:47:32.115323 | orchestrator | 2025-04-04 01:47:28 | INFO  | Wait 1 second(s) until the next check
[... polling cycles repeat every ~3 s at 01:47:32, 01:47:35, 01:47:38, 01:47:41, 01:47:44 and 01:47:47; tasks fd1c07e6-f0db-4865-a8a3-41f95612fdf3, ec99d496-a028-4f88-ac8c-56fe3b93d270, b36de616-88a7-4876-a066-5a28aec9b8ca, 80576d94-e01b-4d4e-bc33-b26dde2d049f, 653a3b35-bc81-46b6-b788-78c30fba3d04 and 366bace1-15e8-415f-b9ae-2fd429ba5c08 remain in state STARTED ...]
2025-04-04 01:47:50.923818 | orchestrator | 2025-04-04 01:47:50 | INFO  | Task 80576d94-e01b-4d4e-bc33-b26dde2d049f is in state SUCCESS
2025-04-04 01:47:54.009344 | orchestrator | 2025-04-04 01:47:54 | INFO  | Task 454bc2f6-9d45-451f-8ce1-d6a4331b55a5 is in state STARTED
[... polling cycles repeat every ~3 s from 01:47:54 through 01:48:22; the remaining six tasks stay in state STARTED ...]
2025-04-04 01:48:25.193145 | orchestrator | 2025-04-04 01:48:25 | INFO  | Task 366bace1-15e8-415f-b9ae-2fd429ba5c08 is in state SUCCESS
2025-04-04 01:48:28.289845 | orchestrator | 2025-04-04 01:48:25 | INFO  | Wait 1 second(s) until the next check
2025-04-04 01:48:28.289997 | orchestrator | 2025-04-04 01:48:28 | INFO  | Task fd1c07e6-f0db-4865-a8a3-41f95612fdf3 is in state STARTED
2025-04-04 01:48:28.293758 | orchestrator | 2025-04-04 01:48:28 | INFO  | Task ec99d496-a028-4f88-ac8c-56fe3b93d270 is in state STARTED
2025-04-04 01:48:28.298391 | orchestrator | 2025-04-04 01:48:28 | INFO  | Task b36de616-88a7-4876-a066-5a28aec9b8ca is in state STARTED
2025-04-04 01:48:28.299859 | orchestrator | 2025-04-04 01:48:28 | INFO  | Task 653a3b35-bc81-46b6-b788-78c30fba3d04 is in state STARTED
2025-04-04 01:48:28.300276 | orchestrator | 2025-04-04 01:48:28 | INFO  | Task 454bc2f6-9d45-451f-8ce1-d6a4331b55a5 is in state STARTED
2025-04-04 01:48:28.301220 | orchestrator | 2025-04-04 01:48:28 | INFO  | Wait 1
second(s) until the next check 2025-04-04 01:48:31.398201 | orchestrator | 2025-04-04 01:48:31 | INFO  | Task fd1c07e6-f0db-4865-a8a3-41f95612fdf3 is in state STARTED 2025-04-04 01:48:31.401464 | orchestrator | 2025-04-04 01:48:31 | INFO  | Task ec99d496-a028-4f88-ac8c-56fe3b93d270 is in state STARTED 2025-04-04 01:48:31.406150 | orchestrator | 2025-04-04 01:48:31 | INFO  | Task b36de616-88a7-4876-a066-5a28aec9b8ca is in state STARTED 2025-04-04 01:48:34.780926 | orchestrator | 2025-04-04 01:48:31 | INFO  | Task 653a3b35-bc81-46b6-b788-78c30fba3d04 is in state STARTED 2025-04-04 01:48:34.781040 | orchestrator | 2025-04-04 01:48:31 | INFO  | Task 454bc2f6-9d45-451f-8ce1-d6a4331b55a5 is in state STARTED 2025-04-04 01:48:34.781076 | orchestrator | 2025-04-04 01:48:31 | INFO  | Wait 1 second(s) until the next check 2025-04-04 01:48:34.781109 | orchestrator | 2025-04-04 01:48:34 | INFO  | Task fd1c07e6-f0db-4865-a8a3-41f95612fdf3 is in state STARTED 2025-04-04 01:48:34.781181 | orchestrator | 2025-04-04 01:48:34 | INFO  | Task ec99d496-a028-4f88-ac8c-56fe3b93d270 is in state STARTED 2025-04-04 01:48:34.781858 | orchestrator | 2025-04-04 01:48:34 | INFO  | Task b36de616-88a7-4876-a066-5a28aec9b8ca is in state STARTED 2025-04-04 01:48:34.782143 | orchestrator | 2025-04-04 01:48:34 | INFO  | Task 653a3b35-bc81-46b6-b788-78c30fba3d04 is in state STARTED 2025-04-04 01:48:34.783081 | orchestrator | 2025-04-04 01:48:34 | INFO  | Task 454bc2f6-9d45-451f-8ce1-d6a4331b55a5 is in state STARTED 2025-04-04 01:48:34.784507 | orchestrator | 2025-04-04 01:48:34 | INFO  | Wait 1 second(s) until the next check 2025-04-04 01:48:37.847444 | orchestrator | 2025-04-04 01:48:37 | INFO  | Task fd1c07e6-f0db-4865-a8a3-41f95612fdf3 is in state STARTED 2025-04-04 01:48:37.852532 | orchestrator | 2025-04-04 01:48:37 | INFO  | Task ec99d496-a028-4f88-ac8c-56fe3b93d270 is in state STARTED 2025-04-04 01:48:37.854688 | orchestrator | 2025-04-04 01:48:37 | INFO  | Task 
b36de616-88a7-4876-a066-5a28aec9b8ca is in state STARTED 2025-04-04 01:48:37.858573 | orchestrator | 2025-04-04 01:48:37 | INFO  | Task 653a3b35-bc81-46b6-b788-78c30fba3d04 is in state STARTED 2025-04-04 01:48:37.860349 | orchestrator | 2025-04-04 01:48:37 | INFO  | Task 454bc2f6-9d45-451f-8ce1-d6a4331b55a5 is in state STARTED 2025-04-04 01:48:37.863436 | orchestrator | 2025-04-04 01:48:37 | INFO  | Wait 1 second(s) until the next check 2025-04-04 01:48:40.942270 | orchestrator | 2025-04-04 01:48:40 | INFO  | Task fd1c07e6-f0db-4865-a8a3-41f95612fdf3 is in state STARTED 2025-04-04 01:48:40.945177 | orchestrator | 2025-04-04 01:48:40 | INFO  | Task ec99d496-a028-4f88-ac8c-56fe3b93d270 is in state STARTED 2025-04-04 01:48:40.948097 | orchestrator | 2025-04-04 01:48:40 | INFO  | Task b36de616-88a7-4876-a066-5a28aec9b8ca is in state STARTED 2025-04-04 01:48:40.951879 | orchestrator | 2025-04-04 01:48:40 | INFO  | Task 653a3b35-bc81-46b6-b788-78c30fba3d04 is in state STARTED 2025-04-04 01:48:40.953477 | orchestrator | 2025-04-04 01:48:40 | INFO  | Task 454bc2f6-9d45-451f-8ce1-d6a4331b55a5 is in state STARTED 2025-04-04 01:48:40.953713 | orchestrator | 2025-04-04 01:48:40 | INFO  | Wait 1 second(s) until the next check 2025-04-04 01:48:44.026913 | orchestrator | 2025-04-04 01:48:44 | INFO  | Task fd1c07e6-f0db-4865-a8a3-41f95612fdf3 is in state STARTED 2025-04-04 01:48:44.027684 | orchestrator | 2025-04-04 01:48:44 | INFO  | Task ec99d496-a028-4f88-ac8c-56fe3b93d270 is in state STARTED 2025-04-04 01:48:44.028437 | orchestrator | 2025-04-04 01:48:44 | INFO  | Task b36de616-88a7-4876-a066-5a28aec9b8ca is in state STARTED 2025-04-04 01:48:44.028471 | orchestrator | 2025-04-04 01:48:44 | INFO  | Task 653a3b35-bc81-46b6-b788-78c30fba3d04 is in state SUCCESS 2025-04-04 01:48:44.031013 | orchestrator | 2025-04-04 01:48:44.031127 | orchestrator | 2025-04-04 01:48:44.031146 | orchestrator | PLAY [Apply role homer] ******************************************************** 
2025-04-04 01:48:44.031163 | orchestrator |
2025-04-04 01:48:44.031178 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] ***
2025-04-04 01:48:44.031193 | orchestrator | Friday 04 April 2025 01:47:05 +0000 (0:00:00.617) 0:00:00.617 **********
2025-04-04 01:48:44.031232 | orchestrator | ok: [testbed-manager] => {
2025-04-04 01:48:44.031249 | orchestrator |  "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter."
2025-04-04 01:48:44.031265 | orchestrator | }
2025-04-04 01:48:44.031280 | orchestrator |
2025-04-04 01:48:44.031294 | orchestrator | TASK [osism.services.homer : Create traefik external network] ******************
2025-04-04 01:48:44.031308 | orchestrator | Friday 04 April 2025 01:47:05 +0000 (0:00:00.563) 0:00:01.181 **********
2025-04-04 01:48:44.031323 | orchestrator | ok: [testbed-manager]
2025-04-04 01:48:44.031338 | orchestrator |
2025-04-04 01:48:44.031352 | orchestrator | TASK [osism.services.homer : Create required directories] **********************
2025-04-04 01:48:44.031396 | orchestrator | Friday 04 April 2025 01:47:09 +0000 (0:00:03.217) 0:00:04.399 **********
2025-04-04 01:48:44.031411 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration)
2025-04-04 01:48:44.031425 | orchestrator | ok: [testbed-manager] => (item=/opt/homer)
2025-04-04 01:48:44.031440 | orchestrator |
2025-04-04 01:48:44.031454 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] ***************
2025-04-04 01:48:44.031468 | orchestrator | Friday 04 April 2025 01:47:10 +0000 (0:00:01.651) 0:00:06.050 **********
2025-04-04 01:48:44.031482 | orchestrator | changed: [testbed-manager]
2025-04-04 01:48:44.031496 | orchestrator |
2025-04-04 01:48:44.031511 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] *********************
2025-04-04 01:48:44.031527 | orchestrator | Friday 04 April 2025 01:47:17 +0000 (0:00:06.192) 0:00:12.243 **********
2025-04-04 01:48:44.031543 | orchestrator | changed: [testbed-manager]
2025-04-04 01:48:44.031559 | orchestrator |
2025-04-04 01:48:44.031575 | orchestrator | TASK [osism.services.homer : Manage homer service] *****************************
2025-04-04 01:48:44.031591 | orchestrator | Friday 04 April 2025 01:47:19 +0000 (0:00:02.799) 0:00:15.043 **********
2025-04-04 01:48:44.031607 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left).
2025-04-04 01:48:44.031623 | orchestrator | ok: [testbed-manager]
2025-04-04 01:48:44.031640 | orchestrator |
2025-04-04 01:48:44.031656 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] *****************
2025-04-04 01:48:44.031671 | orchestrator | Friday 04 April 2025 01:47:47 +0000 (0:00:27.386) 0:00:42.429 **********
2025-04-04 01:48:44.031687 | orchestrator | changed: [testbed-manager]
2025-04-04 01:48:44.031703 | orchestrator |
2025-04-04 01:48:44.031719 | orchestrator | PLAY RECAP *********************************************************************
2025-04-04 01:48:44.031735 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-04-04 01:48:44.031754 | orchestrator |
2025-04-04 01:48:44.031771 | orchestrator | Friday 04 April 2025 01:47:49 +0000 (0:00:02.416) 0:00:44.846 **********
2025-04-04 01:48:44.031787 | orchestrator | ===============================================================================
2025-04-04 01:48:44.031803 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 27.39s
2025-04-04 01:48:44.031819 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 6.19s
2025-04-04 01:48:44.031834 | orchestrator | osism.services.homer : Create traefik external network ------------------ 3.22s
2025-04-04 01:48:44.031850 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 2.80s
2025-04-04 01:48:44.031866 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 2.42s
2025-04-04 01:48:44.031890 | orchestrator | osism.services.homer : Create required directories ---------------------- 1.65s
2025-04-04 01:48:44.031905 | orchestrator | osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards --- 0.56s
2025-04-04 01:48:44.031919 | orchestrator |
2025-04-04 01:48:44.031933 | orchestrator |
2025-04-04 01:48:44.031947 | orchestrator | PLAY [Apply role openstackclient] **********************************************
2025-04-04 01:48:44.031961 | orchestrator |
2025-04-04 01:48:44.031975 | orchestrator | TASK [osism.services.openstackclient : Include tasks] **************************
2025-04-04 01:48:44.031989 | orchestrator | Friday 04 April 2025 01:47:03 +0000 (0:00:00.617) 0:00:00.617 **********
2025-04-04 01:48:44.032010 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager
2025-04-04 01:48:44.032026 | orchestrator |
2025-04-04 01:48:44.032040 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************
2025-04-04 01:48:44.032054 | orchestrator | Friday 04 April 2025 01:47:04 +0000 (0:00:00.414) 0:00:01.031 **********
2025-04-04 01:48:44.032069 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack)
2025-04-04 01:48:44.032083 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data)
2025-04-04 01:48:44.032097 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient)
2025-04-04 01:48:44.032111 | orchestrator |
2025-04-04 01:48:44.032125 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] ***********
2025-04-04 01:48:44.032139 | orchestrator | Friday 04 April 2025 01:47:06 +0000 (0:00:02.823) 0:00:03.855 **********
2025-04-04 01:48:44.032153 | orchestrator | changed: [testbed-manager]
2025-04-04 01:48:44.032167 | orchestrator |
2025-04-04 01:48:44.032181 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] *********
2025-04-04 01:48:44.032195 | orchestrator | Friday 04 April 2025 01:47:09 +0000 (0:00:02.796) 0:00:06.652 **********
2025-04-04 01:48:44.032209 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left).
2025-04-04 01:48:44.032224 | orchestrator | ok: [testbed-manager]
2025-04-04 01:48:44.032238 | orchestrator |
2025-04-04 01:48:44.032264 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] **********
2025-04-04 01:48:44.032279 | orchestrator | Friday 04 April 2025 01:48:06 +0000 (0:00:57.001) 0:01:03.654 **********
2025-04-04 01:48:44.032294 | orchestrator | changed: [testbed-manager]
2025-04-04 01:48:44.032308 | orchestrator |
2025-04-04 01:48:44.032322 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] **********
2025-04-04 01:48:44.032336 | orchestrator | Friday 04 April 2025 01:48:11 +0000 (0:00:04.260) 0:01:07.914 **********
2025-04-04 01:48:44.032350 | orchestrator | ok: [testbed-manager]
2025-04-04 01:48:44.032381 | orchestrator |
2025-04-04 01:48:44.032395 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] ***
2025-04-04 01:48:44.032409 | orchestrator | Friday 04 April 2025 01:48:14 +0000 (0:00:03.316) 0:01:11.230 **********
2025-04-04 01:48:44.032423 | orchestrator | changed: [testbed-manager]
2025-04-04 01:48:44.032437 | orchestrator |
2025-04-04 01:48:44.032451 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] ***
2025-04-04 01:48:44.032465 | orchestrator | Friday 04 April 2025 01:48:18 +0000 (0:00:03.884) 0:01:15.115 **********
2025-04-04 01:48:44.032479 | orchestrator | changed: [testbed-manager]
2025-04-04 01:48:44.032493 | orchestrator |
2025-04-04 01:48:44.032507 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] ***
2025-04-04 01:48:44.032521 | orchestrator | Friday 04 April 2025 01:48:20 +0000 (0:00:01.796) 0:01:16.911 **********
2025-04-04 01:48:44.032535 | orchestrator | changed: [testbed-manager]
2025-04-04 01:48:44.032549 | orchestrator |
2025-04-04 01:48:44.032563 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] ***
2025-04-04 01:48:44.032577 | orchestrator | Friday 04 April 2025 01:48:21 +0000 (0:00:01.490) 0:01:18.402 **********
2025-04-04 01:48:44.032591 | orchestrator | ok: [testbed-manager]
2025-04-04 01:48:44.032606 | orchestrator |
2025-04-04 01:48:44.032619 | orchestrator | PLAY RECAP *********************************************************************
2025-04-04 01:48:44.032634 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-04-04 01:48:44.032648 | orchestrator |
2025-04-04 01:48:44.032662 | orchestrator | Friday 04 April 2025 01:48:22 +0000 (0:00:00.682) 0:01:19.085 **********
2025-04-04 01:48:44.032676 | orchestrator | ===============================================================================
2025-04-04 01:48:44.032690 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 57.00s
2025-04-04 01:48:44.032710 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 4.26s
2025-04-04 01:48:44.032724 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 3.88s
2025-04-04 01:48:44.032738 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 3.32s
2025-04-04 01:48:44.032752 | orchestrator | osism.services.openstackclient : Create required directories ------------ 2.82s
2025-04-04 01:48:44.032771 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 2.80s
2025-04-04 01:48:44.032786 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 1.80s
2025-04-04 01:48:44.032800 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 1.49s
2025-04-04 01:48:44.032814 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.68s
2025-04-04 01:48:44.032827 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.41s
2025-04-04 01:48:44.032841 | orchestrator |
2025-04-04 01:48:44.032855 | orchestrator |
2025-04-04 01:48:44.032869 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-04-04 01:48:44.032883 | orchestrator |
2025-04-04 01:48:44.032897 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-04-04 01:48:44.032911 | orchestrator | Friday 04 April 2025 01:47:04 +0000 (0:00:00.483) 0:00:00.483 **********
2025-04-04 01:48:44.032925 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True)
2025-04-04 01:48:44.032939 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True)
2025-04-04 01:48:44.032952 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True)
2025-04-04 01:48:44.032966 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True)
2025-04-04 01:48:44.032980 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True)
2025-04-04 01:48:44.032994 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True)
2025-04-04 01:48:44.033008 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True)
2025-04-04 01:48:44.033022 | orchestrator |
2025-04-04 01:48:44.033036 | orchestrator | PLAY [Apply role netdata] ******************************************************
2025-04-04 01:48:44.033050 | orchestrator |
2025-04-04 01:48:44.033064 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] ****
2025-04-04 01:48:44.033077 | orchestrator | Friday 04 April 2025 01:47:08 +0000 (0:00:04.346) 0:00:04.829 **********
2025-04-04 01:48:44.033146 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-04-04 01:48:44.033165 | orchestrator |
2025-04-04 01:48:44.033180 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] ***
2025-04-04 01:48:44.033194 | orchestrator | Friday 04 April 2025 01:47:12 +0000 (0:00:04.039) 0:00:08.868 **********
2025-04-04 01:48:44.033208 | orchestrator | ok: [testbed-node-0]
2025-04-04 01:48:44.033222 | orchestrator | ok: [testbed-node-2]
2025-04-04 01:48:44.033236 | orchestrator | ok: [testbed-node-1]
2025-04-04 01:48:44.033250 | orchestrator | ok: [testbed-node-3]
2025-04-04 01:48:44.033264 | orchestrator | ok: [testbed-manager]
2025-04-04 01:48:44.033278 | orchestrator | ok: [testbed-node-4]
2025-04-04 01:48:44.033292 | orchestrator | ok: [testbed-node-5]
2025-04-04 01:48:44.033306 | orchestrator |
2025-04-04 01:48:44.033320 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************
2025-04-04 01:48:44.033341 | orchestrator | Friday 04 April 2025 01:47:17 +0000 (0:00:04.401) 0:00:13.270 **********
2025-04-04 01:48:44.033374 | orchestrator | ok: [testbed-node-0]
2025-04-04 01:48:44.033388 | orchestrator | ok: [testbed-node-1]
2025-04-04 01:48:44.033403 | orchestrator | ok: [testbed-manager]
2025-04-04 01:48:44.033416 | orchestrator | ok: [testbed-node-2]
2025-04-04 01:48:44.033430 | orchestrator | ok: [testbed-node-3]
2025-04-04 01:48:44.033444 | orchestrator | ok: [testbed-node-4]
2025-04-04 01:48:44.033465 | orchestrator | ok: [testbed-node-5]
2025-04-04 01:48:44.033479 | orchestrator |
2025-04-04 01:48:44.033493 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] *************************
2025-04-04 01:48:44.033507 | orchestrator | Friday 04 April 2025 01:47:22 +0000 (0:00:05.137) 0:00:18.407 **********
2025-04-04 01:48:44.033521 | orchestrator | changed: [testbed-node-0]
2025-04-04 01:48:44.033546 | orchestrator | changed: [testbed-node-1]
2025-04-04 01:48:44.033560 | orchestrator | changed: [testbed-manager]
2025-04-04 01:48:44.033574 | orchestrator | changed: [testbed-node-2]
2025-04-04 01:48:44.033588 | orchestrator | changed: [testbed-node-3]
2025-04-04 01:48:44.033602 | orchestrator | changed: [testbed-node-4]
2025-04-04 01:48:44.033616 | orchestrator | changed: [testbed-node-5]
2025-04-04 01:48:44.033630 | orchestrator |
2025-04-04 01:48:44.033644 | orchestrator | TASK [osism.services.netdata : Add repository] *********************************
2025-04-04 01:48:44.033658 | orchestrator | Friday 04 April 2025 01:47:25 +0000 (0:00:03.419) 0:00:21.827 **********
2025-04-04 01:48:44.033672 | orchestrator | changed: [testbed-node-1]
2025-04-04 01:48:44.033686 | orchestrator | changed: [testbed-node-2]
2025-04-04 01:48:44.033700 | orchestrator | changed: [testbed-node-3]
2025-04-04 01:48:44.033714 | orchestrator | changed: [testbed-node-0]
2025-04-04 01:48:44.033728 | orchestrator | changed: [testbed-node-4]
2025-04-04 01:48:44.033741 | orchestrator | changed: [testbed-node-5]
2025-04-04 01:48:44.033756 | orchestrator | changed: [testbed-manager]
2025-04-04 01:48:44.033770 | orchestrator |
2025-04-04 01:48:44.033784 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************
2025-04-04 01:48:44.033798 | orchestrator | Friday 04 April 2025 01:47:37 +0000 (0:00:11.273) 0:00:33.100 **********
2025-04-04 01:48:44.033812 | orchestrator | changed: [testbed-node-0]
2025-04-04 01:48:44.033826 | orchestrator | changed: [testbed-node-5]
2025-04-04 01:48:44.033839 | orchestrator | changed: [testbed-node-2]
2025-04-04 01:48:44.033853 | orchestrator | changed: [testbed-node-1]
2025-04-04 01:48:44.033867 | orchestrator | changed: [testbed-node-4]
2025-04-04 01:48:44.033881 | orchestrator | changed: [testbed-node-3]
2025-04-04 01:48:44.033895 | orchestrator | changed: [testbed-manager]
2025-04-04 01:48:44.033909 | orchestrator |
2025-04-04 01:48:44.033923 | orchestrator | TASK [osism.services.netdata : Include config tasks] ***************************
2025-04-04 01:48:44.033938 | orchestrator | Friday 04 April 2025 01:47:59 +0000 (0:00:22.907) 0:00:56.008 **********
2025-04-04 01:48:44.033953 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-04-04 01:48:44.033972 | orchestrator |
2025-04-04 01:48:44.033987 | orchestrator | TASK [osism.services.netdata : Copy configuration files] ***********************
2025-04-04 01:48:44.034000 | orchestrator | Friday 04 April 2025 01:48:03 +0000 (0:00:03.282) 0:00:59.290 **********
2025-04-04 01:48:44.034066 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf)
2025-04-04 01:48:44.034085 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf)
2025-04-04 01:48:44.034100 | orchestrator | changed: [testbed-manager] => (item=netdata.conf)
2025-04-04 01:48:44.034114 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf)
2025-04-04 01:48:44.034129 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf)
2025-04-04 01:48:44.034142 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf)
2025-04-04 01:48:44.034156 | orchestrator | changed: [testbed-manager] => (item=stream.conf)
2025-04-04 01:48:44.034170 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf)
2025-04-04 01:48:44.034184 | orchestrator | changed: [testbed-node-2] => (item=stream.conf)
2025-04-04 01:48:44.034198 | orchestrator | changed: [testbed-node-1] => (item=stream.conf)
2025-04-04 01:48:44.034212 | orchestrator | changed: [testbed-node-0] => (item=stream.conf)
2025-04-04 01:48:44.034226 | orchestrator | changed: [testbed-node-4] => (item=stream.conf)
2025-04-04 01:48:44.034240 | orchestrator | changed: [testbed-node-3] => (item=stream.conf)
2025-04-04 01:48:44.034261 | orchestrator | changed: [testbed-node-5] => (item=stream.conf)
2025-04-04 01:48:44.034275 | orchestrator |
2025-04-04 01:48:44.034289 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] ***
2025-04-04 01:48:44.034303 | orchestrator | Friday 04 April 2025 01:48:15 +0000 (0:00:12.258) 0:01:11.548 **********
2025-04-04 01:48:44.034317 | orchestrator | ok: [testbed-manager]
2025-04-04 01:48:44.034332 | orchestrator | ok: [testbed-node-0]
2025-04-04 01:48:44.034345 | orchestrator | ok: [testbed-node-1]
2025-04-04 01:48:44.034388 | orchestrator | ok: [testbed-node-2]
2025-04-04 01:48:44.034404 | orchestrator | ok: [testbed-node-3]
2025-04-04 01:48:44.034418 | orchestrator | ok: [testbed-node-4]
2025-04-04 01:48:44.034431 | orchestrator | ok: [testbed-node-5]
2025-04-04 01:48:44.034445 | orchestrator |
2025-04-04 01:48:44.034459 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] **************
2025-04-04 01:48:44.034473 | orchestrator | Friday 04 April 2025 01:48:19 +0000 (0:00:03.606) 0:01:15.155 **********
2025-04-04 01:48:44.034487 | orchestrator | changed: [testbed-node-0]
2025-04-04 01:48:44.034501 | orchestrator | changed: [testbed-manager]
2025-04-04 01:48:44.034515 | orchestrator | changed: [testbed-node-1]
2025-04-04 01:48:44.034529 | orchestrator | changed: [testbed-node-2]
2025-04-04 01:48:44.034543 | orchestrator | changed: [testbed-node-3]
2025-04-04 01:48:44.034557 | orchestrator | changed: [testbed-node-4]
2025-04-04 01:48:44.034571 | orchestrator | changed: [testbed-node-5]
2025-04-04 01:48:44.034585 | orchestrator |
2025-04-04 01:48:44.034599 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] ***************
2025-04-04 01:48:44.034613 | orchestrator | Friday 04 April 2025 01:48:23 +0000 (0:00:04.877) 0:01:20.033 **********
2025-04-04 01:48:44.034627 | orchestrator | ok: [testbed-node-0]
2025-04-04 01:48:44.034641 | orchestrator | ok: [testbed-node-1]
2025-04-04 01:48:44.034655 | orchestrator | ok: [testbed-node-2]
2025-04-04 01:48:44.034668 | orchestrator | ok: [testbed-manager]
2025-04-04 01:48:44.034690 | orchestrator | ok: [testbed-node-3]
2025-04-04 01:48:44.034705 | orchestrator | ok: [testbed-node-4]
2025-04-04 01:48:44.034719 | orchestrator | ok: [testbed-node-5]
2025-04-04 01:48:44.034733 | orchestrator |
2025-04-04 01:48:44.034747 | orchestrator | TASK [osism.services.netdata : Manage service netdata] *************************
2025-04-04 01:48:44.034767 | orchestrator | Friday 04 April 2025 01:48:26 +0000 (0:00:02.961) 0:01:22.994 **********
2025-04-04 01:48:44.034782 | orchestrator | ok: [testbed-manager]
2025-04-04 01:48:44.034796 | orchestrator | ok: [testbed-node-2]
2025-04-04 01:48:44.034810 | orchestrator | ok: [testbed-node-1]
2025-04-04 01:48:44.034824 | orchestrator | ok: [testbed-node-3]
2025-04-04 01:48:44.034838 | orchestrator | ok: [testbed-node-0]
2025-04-04 01:48:44.034851 | orchestrator | ok: [testbed-node-4]
2025-04-04 01:48:44.034865 | orchestrator | ok: [testbed-node-5]
2025-04-04 01:48:44.034879 | orchestrator |
2025-04-04 01:48:44.034893 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] ***************
2025-04-04 01:48:44.034908 | orchestrator | Friday 04 April 2025 01:48:30 +0000 (0:00:03.548) 0:01:26.542 **********
2025-04-04 01:48:44.034922 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager
2025-04-04 01:48:44.034938 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-04-04 01:48:44.034953 | orchestrator |
2025-04-04 01:48:44.034967 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] **********
2025-04-04 01:48:44.034981 | orchestrator | Friday 04 April 2025 01:48:32 +0000 (0:00:02.295) 0:01:28.838 **********
2025-04-04 01:48:44.034995 | orchestrator | changed: [testbed-manager]
2025-04-04 01:48:44.035010 | orchestrator |
2025-04-04 01:48:44.035023 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] *************
2025-04-04 01:48:44.035038 | orchestrator | Friday 04 April 2025 01:48:36 +0000 (0:00:03.642) 0:01:32.481 **********
2025-04-04 01:48:44.035058 | orchestrator | changed: [testbed-node-0]
2025-04-04 01:48:44.035073 | orchestrator | changed: [testbed-node-1]
2025-04-04 01:48:44.035088 | orchestrator | changed: [testbed-manager]
2025-04-04 01:48:44.035110 | orchestrator | changed: [testbed-node-2]
2025-04-04 01:48:44.035125 | orchestrator | changed: [testbed-node-3]
2025-04-04 01:48:44.035140 | orchestrator | changed: [testbed-node-4]
2025-04-04 01:48:44.035154 | orchestrator | changed: [testbed-node-5]
2025-04-04 01:48:44.035168 | orchestrator |
2025-04-04 01:48:44.035183 | orchestrator | PLAY RECAP *********************************************************************
2025-04-04 01:48:44.035197 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-04-04 01:48:44.035211 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-04-04 01:48:44.035225 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-04-04 01:48:44.035246 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-04-04 01:48:44.035260 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-04-04 01:48:44.035275 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-04-04 01:48:44.035289 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-04-04 01:48:44.035303 | orchestrator |
2025-04-04 01:48:44.035317 | orchestrator | Friday 04 April 2025 01:48:39 +0000 (0:00:03.562) 0:01:36.043 **********
2025-04-04 01:48:44.035331 | orchestrator | ===============================================================================
2025-04-04 01:48:44.035345 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 22.91s
2025-04-04 01:48:44.035379 | orchestrator | osism.services.netdata : Copy configuration files ---------------------- 12.26s
2025-04-04 01:48:44.035393 | orchestrator | osism.services.netdata : Add repository -------------------------------- 11.27s
2025-04-04 01:48:44.035408 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 5.13s
2025-04-04 01:48:44.035422 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 4.88s
2025-04-04 01:48:44.035436 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 4.41s
2025-04-04 01:48:44.035450 | orchestrator | Group hosts based on enabled services ----------------------------------- 4.34s
2025-04-04 01:48:44.035464 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 4.04s
2025-04-04 01:48:44.035477 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 3.64s
2025-04-04 01:48:44.035492 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 3.61s
2025-04-04 01:48:44.035506 | orchestrator | osism.services.netdata : Restart service netdata ------------------------ 3.56s
2025-04-04 01:48:44.035520 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 3.55s
2025-04-04 01:48:44.035534 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 3.42s
2025-04-04 01:48:44.035548 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 3.28s
2025-04-04 01:48:44.035568 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 2.96s
2025-04-04 01:48:47.145874 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 2.30s
2025-04-04 01:48:47.146084 | orchestrator | 2025-04-04 01:48:44 | INFO  | Task 454bc2f6-9d45-451f-8ce1-d6a4331b55a5 is in state STARTED
2025-04-04 01:48:47.146108 | orchestrator | 2025-04-04 01:48:44 | INFO  | Wait 1 second(s) until the next check
2025-04-04 01:48:47.146192 | orchestrator | 2025-04-04 01:48:47 | INFO  | Task fd1c07e6-f0db-4865-a8a3-41f95612fdf3 is in state STARTED
2025-04-04 01:48:47.148516 | orchestrator | 2025-04-04 01:48:47 | INFO  | Task ec99d496-a028-4f88-ac8c-56fe3b93d270 is in state STARTED
2025-04-04 01:48:47.148545 | orchestrator | 2025-04-04 01:48:47 | INFO  | Task b36de616-88a7-4876-a066-5a28aec9b8ca is in state STARTED
2025-04-04 01:48:47.148567 | orchestrator | 2025-04-04 01:48:47 | INFO  | Task 454bc2f6-9d45-451f-8ce1-d6a4331b55a5 is in state STARTED
2025-04-04 01:48:50.194729 | orchestrator | 2025-04-04 01:48:47 | INFO  | Wait 1 second(s) until the next check
[... identical polling output repeats every ~3 seconds; all four tasks remain in state STARTED, except ...]
2025-04-04 01:48:56.326391 | orchestrator | 2025-04-04 01:48:56 | INFO  | Task b36de616-88a7-4876-a066-5a28aec9b8ca is in state SUCCESS
[... polling of tasks fd1c07e6, ec99d496 and 454bc2f6 continues unchanged (STARTED / "Wait 1 second(s) until the next check") until 01:50:06 ...]
2025-04-04 01:50:06.725859 | orchestrator | 2025-04-04 01:50:06 | INFO  | Task fd1c07e6-f0db-4865-a8a3-41f95612fdf3 is in state STARTED
2025-04-04 01:50:06.731297 | orchestrator |
2025-04-04 01:50:06.731328 | orchestrator |
2025-04-04 01:50:06.731341 | orchestrator | PLAY [Apply role phpmyadmin] ***************************************************
2025-04-04 01:50:06.731368 | orchestrator |
2025-04-04 01:50:06.731380 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] *************
2025-04-04 01:50:06.731390 | orchestrator | Friday 04 April 2025 01:47:33 +0000 (0:00:00.317) 0:00:00.317 **********
2025-04-04 01:50:06.731400 | orchestrator | ok: [testbed-manager]
2025-04-04 01:50:06.731412 | orchestrator |
2025-04-04 01:50:06.731423 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories]
*****************
2025-04-04 01:50:06.731433 | orchestrator | Friday 04 April 2025 01:47:35 +0000 (0:00:01.560) 0:00:01.879 **********
2025-04-04 01:50:06.731468 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin)
2025-04-04 01:50:06.731480 | orchestrator |
2025-04-04 01:50:06.731490 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] ****************
2025-04-04 01:50:06.731500 | orchestrator | Friday 04 April 2025 01:47:36 +0000 (0:00:01.617) 0:00:03.497 **********
2025-04-04 01:50:06.731510 | orchestrator | changed: [testbed-manager]
2025-04-04 01:50:06.731522 | orchestrator |
2025-04-04 01:50:06.731545 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] *******************
2025-04-04 01:50:06.731556 | orchestrator | Friday 04 April 2025 01:47:39 +0000 (0:00:03.105) 0:00:06.603 **********
2025-04-04 01:50:06.731566 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left).
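The `FAILED - RETRYING: ... (10 retries left)` line above is Ansible's retry mechanism: a task declared with `retries`/`until` is re-run until its condition holds or the retries are exhausted. A minimal sketch of that pattern, assuming a docker-compose-based service like the one this role manages (the module choice, project path, and condition here are illustrative assumptions, not the actual task from osism.services.phpmyadmin):

```yaml
# Hypothetical sketch of a retried "manage service" task.
# Module, paths and the until-condition are assumptions.
- name: Manage phpmyadmin service
  community.docker.docker_compose_v2:
    project_src: /opt/phpmyadmin   # where docker-compose.yml was copied
    state: present
  register: result
  retries: 10                      # prints "FAILED - RETRYING: ... (N retries left)" on each failure
  delay: 5                         # seconds between attempts
  until: result is not failed
```

With this shape, the single retry seen in the log (one FAILED - RETRYING line followed by `ok:`) means the second attempt succeeded, which matches the ~68 s the task took in the timing summary below.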
2025-04-04 01:50:06.731577 | orchestrator | ok: [testbed-manager]
2025-04-04 01:50:06.731587 | orchestrator |
2025-04-04 01:50:06.731597 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] *******
2025-04-04 01:50:06.731607 | orchestrator | Friday 04 April 2025 01:48:47 +0000 (0:01:07.996) 0:01:14.599 **********
2025-04-04 01:50:06.731618 | orchestrator | changed: [testbed-manager]
2025-04-04 01:50:06.731628 | orchestrator |
2025-04-04 01:50:06.731638 | orchestrator | PLAY RECAP *********************************************************************
2025-04-04 01:50:06.731648 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-04-04 01:50:06.731660 | orchestrator |
2025-04-04 01:50:06.731670 | orchestrator | Friday 04 April 2025 01:48:51 +0000 (0:00:03.904) 0:01:18.503 **********
2025-04-04 01:50:06.731680 | orchestrator | ===============================================================================
2025-04-04 01:50:06.731690 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 68.00s
2025-04-04 01:50:06.731701 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ------------------ 3.90s
2025-04-04 01:50:06.731711 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 3.11s
2025-04-04 01:50:06.731721 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 1.62s
2025-04-04 01:50:06.731732 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 1.56s
2025-04-04 01:50:06.731742 | orchestrator |
2025-04-04 01:50:06.731756 | orchestrator | 2025-04-04 01:50:06 | INFO  | Task ec99d496-a028-4f88-ac8c-56fe3b93d270 is in state SUCCESS
2025-04-04 01:50:06.733480 | orchestrator |
2025-04-04 01:50:06.733517 | orchestrator | PLAY [Apply role common] *******************************************************
2025-04-04
01:50:06.733528 | orchestrator |
2025-04-04 01:50:06.733538 | orchestrator | TASK [common : include_tasks] **************************************************
2025-04-04 01:50:06.733549 | orchestrator | Friday 04 April 2025 01:46:57 +0000 (0:00:00.608) 0:00:00.608 **********
2025-04-04 01:50:06.733559 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-04-04 01:50:06.733571 | orchestrator |
2025-04-04 01:50:06.733581 | orchestrator | TASK [common : Ensuring config directories exist] ******************************
2025-04-04 01:50:06.733591 | orchestrator | Friday 04 April 2025 01:47:00 +0000 (0:00:02.293) 0:00:02.902 **********
2025-04-04 01:50:06.733601 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron'])
2025-04-04 01:50:06.733612 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron'])
2025-04-04 01:50:06.733622 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron'])
2025-04-04 01:50:06.733632 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-04-04 01:50:06.733643 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-04-04 01:50:06.733653 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron'])
2025-04-04 01:50:06.733663 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron'])
2025-04-04 01:50:06.733684 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-04-04 01:50:06.733696 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-04-04 01:50:06.733706 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-04-04 01:50:06.733716 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-04-04 01:50:06.733726 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron'])
2025-04-04 01:50:06.733737 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron'])
2025-04-04 01:50:06.733747 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-04-04 01:50:06.733757 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-04-04 01:50:06.733767 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-04-04 01:50:06.733778 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-04-04 01:50:06.733788 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-04-04 01:50:06.733798 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-04-04 01:50:06.733808 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-04-04 01:50:06.733819 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-04-04 01:50:06.733829 | orchestrator |
2025-04-04 01:50:06.733843 | orchestrator | TASK [common : include_tasks] **************************************************
2025-04-04 01:50:06.733854 | orchestrator | Friday 04 April 2025 01:47:06 +0000 (0:00:06.275) 0:00:09.178 **********
2025-04-04 01:50:06.733864 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-04-04 01:50:06.733880 | orchestrator |
2025-04-04 01:50:06.733890 | orchestrator | TASK [service-cert-copy : common | Copying
over extra CA certificates] ********* 2025-04-04 01:50:06.733900 | orchestrator | Friday 04 April 2025 01:47:09 +0000 (0:00:03.519) 0:00:12.697 ********** 2025-04-04 01:50:06.733913 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-04 01:50:06.733926 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-04 01:50:06.733945 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-04 01:50:06.733961 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 
'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-04 01:50:06.733972 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-04 01:50:06.733983 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-04 01:50:06.733994 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 
'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-04 01:50:06.734005 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-04 01:50:06.734065 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-04 01:50:06.734087 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-04 01:50:06.734109 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-04 01:50:06.734122 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-04 01:50:06.734134 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-04 01:50:06.734150 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-04 01:50:06.734163 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-04 01:50:06.734176 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-04 01:50:06.734199 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 
'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-04 01:50:06.734216 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-04 01:50:06.734228 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-04 01:50:06.734240 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-04 01:50:06.734252 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-04 01:50:06.734263 | orchestrator | 2025-04-04 01:50:06.734276 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2025-04-04 01:50:06.734287 | orchestrator | Friday 04 April 2025 01:47:16 +0000 (0:00:07.114) 0:00:19.812 ********** 2025-04-04 01:50:06.734299 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-04-04 01:50:06.734311 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-04 01:50:06.734327 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-04 01:50:06.734344 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-04-04 01:50:06.734382 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-04 01:50:06.734395 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-04 01:50:06.734407 | orchestrator | skipping: [testbed-manager] 2025-04-04 01:50:06.734420 | orchestrator | skipping: [testbed-node-0] 2025-04-04 01:50:06.734432 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-04-04 01:50:06.734443 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-04 01:50:06.734454 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-04 01:50:06.734465 | orchestrator | skipping: [testbed-node-1] 2025-04-04 01:50:06.734475 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-04-04 01:50:06.734486 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-04 01:50:06.734509 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-04 01:50:06.734520 | orchestrator | skipping: [testbed-node-2] 2025-04-04 01:50:06.734531 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-04-04 01:50:06.734541 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-04-04 01:50:06.734552 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-04 01:50:06.734563 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-04-04 01:50:06.734575 | orchestrator | skipping: [testbed-node-4] => (item={'key': 
'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-04 01:50:06.734585 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-04 01:50:06.734601 | orchestrator | skipping: [testbed-node-3] 2025-04-04 01:50:06.734620 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-04 01:50:06.734632 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-04 01:50:06.734643 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-04 01:50:06.734653 | orchestrator | skipping: [testbed-node-4] 2025-04-04 01:50:06.734664 | orchestrator | skipping: [testbed-node-5] 2025-04-04 01:50:06.734674 | orchestrator | 2025-04-04 01:50:06.734684 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2025-04-04 01:50:06.734695 | orchestrator | Friday 04 April 2025 01:47:20 +0000 (0:00:03.080) 0:00:22.892 ********** 2025-04-04 01:50:06.734705 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-04-04 01:50:06.734716 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-04 01:50:06.734727 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-04 01:50:06.734742 | orchestrator | skipping: [testbed-manager] 2025-04-04 01:50:06.734752 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-04-04 01:50:06.734769 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': 
True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-04 01:50:06.735227 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-04 01:50:06.735330 | orchestrator | skipping: [testbed-node-0] 2025-04-04 01:50:06.735378 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-04-04 01:50:06.735396 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}})  2025-04-04 01:50:06.735410 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-04 01:50:06.735425 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-04-04 01:50:06.735460 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-04 01:50:06.735474 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 
'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-04 01:50:06.735504 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-04-04 01:50:06.735524 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-04 01:50:06.735537 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-04 01:50:06.735550 | orchestrator | skipping: [testbed-node-1] 2025-04-04 01:50:06.735563 | 
orchestrator | skipping: [testbed-node-2] 2025-04-04 01:50:06.735576 | orchestrator | skipping: [testbed-node-3] 2025-04-04 01:50:06.735589 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-04-04 01:50:06.735602 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-04 01:50:06.735621 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-04 01:50:06.735634 | orchestrator | skipping: [testbed-node-4] 2025-04-04 01:50:06.735646 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': 
{'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-04-04 01:50:06.735668 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-04 01:50:06.735682 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-04 01:50:06.735695 | orchestrator | skipping: [testbed-node-5] 2025-04-04 01:50:06.735708 | orchestrator | 2025-04-04 01:50:06.735721 | orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2025-04-04 01:50:06.735734 | orchestrator | Friday 04 April 2025 01:47:25 +0000 (0:00:05.140) 0:00:28.033 ********** 2025-04-04 01:50:06.735747 | orchestrator | 
skipping: [testbed-manager] 2025-04-04 01:50:06.735759 | orchestrator | skipping: [testbed-node-0] 2025-04-04 01:50:06.735771 | orchestrator | skipping: [testbed-node-1] 2025-04-04 01:50:06.735784 | orchestrator | skipping: [testbed-node-2] 2025-04-04 01:50:06.735796 | orchestrator | skipping: [testbed-node-3] 2025-04-04 01:50:06.735809 | orchestrator | skipping: [testbed-node-4] 2025-04-04 01:50:06.735822 | orchestrator | skipping: [testbed-node-5] 2025-04-04 01:50:06.735834 | orchestrator | 2025-04-04 01:50:06.735847 | orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2025-04-04 01:50:06.735859 | orchestrator | Friday 04 April 2025 01:47:27 +0000 (0:00:02.026) 0:00:30.059 ********** 2025-04-04 01:50:06.735872 | orchestrator | skipping: [testbed-manager] 2025-04-04 01:50:06.735885 | orchestrator | skipping: [testbed-node-0] 2025-04-04 01:50:06.735897 | orchestrator | skipping: [testbed-node-1] 2025-04-04 01:50:06.735909 | orchestrator | skipping: [testbed-node-2] 2025-04-04 01:50:06.735922 | orchestrator | skipping: [testbed-node-3] 2025-04-04 01:50:06.735934 | orchestrator | skipping: [testbed-node-4] 2025-04-04 01:50:06.735946 | orchestrator | skipping: [testbed-node-5] 2025-04-04 01:50:06.735958 | orchestrator | 2025-04-04 01:50:06.735971 | orchestrator | TASK [common : Ensure fluentd image is present for label check] **************** 2025-04-04 01:50:06.735983 | orchestrator | Friday 04 April 2025 01:47:29 +0000 (0:00:02.057) 0:00:32.116 ********** 2025-04-04 01:50:06.736001 | orchestrator | ok: [testbed-node-0] 2025-04-04 01:50:06.736015 | orchestrator | changed: [testbed-node-1] 2025-04-04 01:50:06.736027 | orchestrator | changed: [testbed-node-2] 2025-04-04 01:50:06.736040 | orchestrator | changed: [testbed-node-3] 2025-04-04 01:50:06.736052 | orchestrator | changed: [testbed-node-4] 2025-04-04 01:50:06.736064 | orchestrator | changed: [testbed-node-5] 2025-04-04 01:50:06.736077 | orchestrator | changed: 
[testbed-manager] 2025-04-04 01:50:06.736089 | orchestrator | 2025-04-04 01:50:06.736102 | orchestrator | TASK [common : Fetch fluentd Docker image labels] ****************************** 2025-04-04 01:50:06.736114 | orchestrator | Friday 04 April 2025 01:48:20 +0000 (0:00:51.463) 0:01:23.580 ********** 2025-04-04 01:50:06.736127 | orchestrator | ok: [testbed-node-2] 2025-04-04 01:50:06.736139 | orchestrator | ok: [testbed-node-1] 2025-04-04 01:50:06.736152 | orchestrator | ok: [testbed-node-0] 2025-04-04 01:50:06.736164 | orchestrator | ok: [testbed-node-3] 2025-04-04 01:50:06.736177 | orchestrator | ok: [testbed-manager] 2025-04-04 01:50:06.736189 | orchestrator | ok: [testbed-node-4] 2025-04-04 01:50:06.736201 | orchestrator | ok: [testbed-node-5] 2025-04-04 01:50:06.736218 | orchestrator | 2025-04-04 01:50:06.736231 | orchestrator | TASK [common : Set fluentd facts] ********************************************** 2025-04-04 01:50:06.736244 | orchestrator | Friday 04 April 2025 01:48:24 +0000 (0:00:04.275) 0:01:27.855 ********** 2025-04-04 01:50:06.736257 | orchestrator | ok: [testbed-manager] 2025-04-04 01:50:06.736269 | orchestrator | ok: [testbed-node-0] 2025-04-04 01:50:06.736282 | orchestrator | ok: [testbed-node-1] 2025-04-04 01:50:06.736295 | orchestrator | ok: [testbed-node-2] 2025-04-04 01:50:06.736307 | orchestrator | ok: [testbed-node-3] 2025-04-04 01:50:06.736319 | orchestrator | ok: [testbed-node-4] 2025-04-04 01:50:06.736332 | orchestrator | ok: [testbed-node-5] 2025-04-04 01:50:06.736344 | orchestrator | 2025-04-04 01:50:06.736371 | orchestrator | TASK [common : Fetch fluentd Podman image labels] ****************************** 2025-04-04 01:50:06.736384 | orchestrator | Friday 04 April 2025 01:48:27 +0000 (0:00:02.245) 0:01:30.101 ********** 2025-04-04 01:50:06.736397 | orchestrator | skipping: [testbed-manager] 2025-04-04 01:50:06.736410 | orchestrator | skipping: [testbed-node-0] 2025-04-04 01:50:06.736422 | orchestrator | skipping: 
[testbed-node-1] 2025-04-04 01:50:06.736434 | orchestrator | skipping: [testbed-node-2] 2025-04-04 01:50:06.736447 | orchestrator | skipping: [testbed-node-3] 2025-04-04 01:50:06.736459 | orchestrator | skipping: [testbed-node-4] 2025-04-04 01:50:06.736472 | orchestrator | skipping: [testbed-node-5] 2025-04-04 01:50:06.736484 | orchestrator | 2025-04-04 01:50:06.736497 | orchestrator | TASK [common : Set fluentd facts] ********************************************** 2025-04-04 01:50:06.736509 | orchestrator | Friday 04 April 2025 01:48:29 +0000 (0:00:01.934) 0:01:32.036 ********** 2025-04-04 01:50:06.736522 | orchestrator | skipping: [testbed-manager] 2025-04-04 01:50:06.736534 | orchestrator | skipping: [testbed-node-0] 2025-04-04 01:50:06.736546 | orchestrator | skipping: [testbed-node-1] 2025-04-04 01:50:06.736559 | orchestrator | skipping: [testbed-node-2] 2025-04-04 01:50:06.736571 | orchestrator | skipping: [testbed-node-3] 2025-04-04 01:50:06.736584 | orchestrator | skipping: [testbed-node-4] 2025-04-04 01:50:06.736596 | orchestrator | skipping: [testbed-node-5] 2025-04-04 01:50:06.736609 | orchestrator | 2025-04-04 01:50:06.736621 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2025-04-04 01:50:06.736633 | orchestrator | Friday 04 April 2025 01:48:30 +0000 (0:00:01.261) 0:01:33.298 ********** 2025-04-04 01:50:06.736654 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-04 01:50:06.736677 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-04 01:50:06.736691 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-04 01:50:06.736705 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-04 01:50:06.736718 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-04 01:50:06.736731 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-04 01:50:06.736744 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-04 01:50:06.736757 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-04 01:50:06.736786 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-04 01:50:06.736800 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-04 01:50:06.736813 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-04 01:50:06.736826 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-04 01:50:06.736839 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-04 01:50:06.736852 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-04 01:50:06.736865 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 
'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-04 01:50:06.736889 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-04 01:50:06.736907 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-04 01:50:06.736924 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-04 01:50:06.736938 | orchestrator | changed: 
[testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-04 01:50:06.736950 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-04 01:50:06.736963 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-04 01:50:06.736976 | orchestrator | 2025-04-04 01:50:06.736989 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2025-04-04 01:50:06.737001 | orchestrator | Friday 04 April 2025 01:48:36 +0000 (0:00:06.502) 0:01:39.801 ********** 2025-04-04 01:50:06.737014 | orchestrator | [WARNING]: Skipped 2025-04-04 01:50:06.737027 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2025-04-04 01:50:06.737039 | orchestrator | to this access issue: 2025-04-04 01:50:06.737052 | orchestrator | 
'/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2025-04-04 01:50:06.737065 | orchestrator | directory 2025-04-04 01:50:06.737077 | orchestrator | ok: [testbed-manager -> localhost] 2025-04-04 01:50:06.737090 | orchestrator | 2025-04-04 01:50:06.737102 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************ 2025-04-04 01:50:06.737115 | orchestrator | Friday 04 April 2025 01:48:38 +0000 (0:00:01.405) 0:01:41.206 ********** 2025-04-04 01:50:06.737138 | orchestrator | [WARNING]: Skipped 2025-04-04 01:50:06.737158 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2025-04-04 01:50:06.737170 | orchestrator | to this access issue: 2025-04-04 01:50:06.737183 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2025-04-04 01:50:06.737195 | orchestrator | directory 2025-04-04 01:50:06.737208 | orchestrator | ok: [testbed-manager -> localhost] 2025-04-04 01:50:06.737220 | orchestrator | 2025-04-04 01:50:06.737236 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2025-04-04 01:50:06.737249 | orchestrator | Friday 04 April 2025 01:48:39 +0000 (0:00:01.498) 0:01:42.704 ********** 2025-04-04 01:50:06.737261 | orchestrator | [WARNING]: Skipped 2025-04-04 01:50:06.737274 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2025-04-04 01:50:06.737286 | orchestrator | to this access issue: 2025-04-04 01:50:06.737299 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2025-04-04 01:50:06.737311 | orchestrator | directory 2025-04-04 01:50:06.737323 | orchestrator | ok: [testbed-manager -> localhost] 2025-04-04 01:50:06.737336 | orchestrator | 2025-04-04 01:50:06.737348 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2025-04-04 01:50:06.737431 | 
orchestrator | Friday 04 April 2025 01:48:40 +0000 (0:00:00.779) 0:01:43.484 ********** 2025-04-04 01:50:06.737445 | orchestrator | [WARNING]: Skipped 2025-04-04 01:50:06.737457 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2025-04-04 01:50:06.737470 | orchestrator | to this access issue: 2025-04-04 01:50:06.737483 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2025-04-04 01:50:06.737495 | orchestrator | directory 2025-04-04 01:50:06.737508 | orchestrator | ok: [testbed-manager -> localhost] 2025-04-04 01:50:06.737520 | orchestrator | 2025-04-04 01:50:06.737533 | orchestrator | TASK [common : Copying over td-agent.conf] ************************************* 2025-04-04 01:50:06.737545 | orchestrator | Friday 04 April 2025 01:48:41 +0000 (0:00:00.702) 0:01:44.186 ********** 2025-04-04 01:50:06.737557 | orchestrator | changed: [testbed-manager] 2025-04-04 01:50:06.737570 | orchestrator | changed: [testbed-node-0] 2025-04-04 01:50:06.737582 | orchestrator | changed: [testbed-node-1] 2025-04-04 01:50:06.737595 | orchestrator | changed: [testbed-node-2] 2025-04-04 01:50:06.737607 | orchestrator | changed: [testbed-node-3] 2025-04-04 01:50:06.737619 | orchestrator | changed: [testbed-node-4] 2025-04-04 01:50:06.737632 | orchestrator | changed: [testbed-node-5] 2025-04-04 01:50:06.737644 | orchestrator | 2025-04-04 01:50:06.737657 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2025-04-04 01:50:06.737669 | orchestrator | Friday 04 April 2025 01:48:48 +0000 (0:00:06.880) 0:01:51.067 ********** 2025-04-04 01:50:06.737681 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-04-04 01:50:06.737694 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-04-04 01:50:06.737706 | orchestrator | changed: 
[testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-04-04 01:50:06.737719 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-04-04 01:50:06.737731 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-04-04 01:50:06.737744 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-04-04 01:50:06.737756 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-04-04 01:50:06.737769 | orchestrator | 2025-04-04 01:50:06.737781 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2025-04-04 01:50:06.737793 | orchestrator | Friday 04 April 2025 01:48:52 +0000 (0:00:03.865) 0:01:54.932 ********** 2025-04-04 01:50:06.737813 | orchestrator | changed: [testbed-node-0] 2025-04-04 01:50:06.737826 | orchestrator | changed: [testbed-manager] 2025-04-04 01:50:06.737839 | orchestrator | changed: [testbed-node-1] 2025-04-04 01:50:06.737851 | orchestrator | changed: [testbed-node-2] 2025-04-04 01:50:06.737863 | orchestrator | changed: [testbed-node-3] 2025-04-04 01:50:06.737876 | orchestrator | changed: [testbed-node-4] 2025-04-04 01:50:06.737888 | orchestrator | changed: [testbed-node-5] 2025-04-04 01:50:06.737901 | orchestrator | 2025-04-04 01:50:06.737913 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2025-04-04 01:50:06.737925 | orchestrator | Friday 04 April 2025 01:48:55 +0000 (0:00:03.669) 0:01:58.601 ********** 2025-04-04 01:50:06.737939 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-04 01:50:06.737952 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-04 01:50:06.737966 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-04 01:50:06.737985 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-04 01:50:06.738000 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-04 01:50:06.738123 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-04 01:50:06.738149 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-04 01:50:06.738207 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': 
{'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-04 01:50:06.738221 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-04 01:50:06.738236 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-04 01:50:06.738256 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-04 01:50:06.738270 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-04 01:50:06.738283 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-04 01:50:06.738302 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-04 01:50:06.738315 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 
'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-04 01:50:06.738328 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-04 01:50:06.738341 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-04 01:50:06.738371 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-04 01:50:06.738392 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-04 01:50:06.738410 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-04 01:50:06.738423 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-04 01:50:06.738441 | orchestrator | 2025-04-04 01:50:06.738454 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2025-04-04 01:50:06.738467 | orchestrator | Friday 04 April 2025 01:48:58 +0000 (0:00:03.073) 0:02:01.675 ********** 2025-04-04 01:50:06.738479 | orchestrator | changed: [testbed-manager] => 
(item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-04-04 01:50:06.738492 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-04-04 01:50:06.738504 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-04-04 01:50:06.738517 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-04-04 01:50:06.738529 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-04-04 01:50:06.738541 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-04-04 01:50:06.738553 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-04-04 01:50:06.738566 | orchestrator |
2025-04-04 01:50:06.738578 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] **********************
2025-04-04 01:50:06.738590 | orchestrator | Friday 04 April 2025 01:49:02 +0000 (0:00:03.431) 0:02:05.107 **********
2025-04-04 01:50:06.738602 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-04-04 01:50:06.738615 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-04-04 01:50:06.738627 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-04-04 01:50:06.738640 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-04-04 01:50:06.738652 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-04-04 01:50:06.738665 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-04-04 01:50:06.738677 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-04-04 01:50:06.738689 | orchestrator |
2025-04-04 01:50:06.738701 | orchestrator | TASK [common : Check common containers] ****************************************
2025-04-04 01:50:06.738714 | orchestrator | Friday 04 April 2025 01:49:05 +0000 (0:00:02.972) 0:02:08.079 **********
2025-04-04 01:50:06.738726 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-04-04 01:50:06.738739 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-04-04 01:50:06.738758 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-04-04 01:50:06.738777 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-04-04 01:50:06.738790 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-04 01:50:06.738803 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-04-04 01:50:06.738816 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-04 01:50:06.738829 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-04 01:50:06.738843 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-04 01:50:06.738861 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-04 01:50:06.738880 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-04-04 01:50:06.738893 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-04 01:50:06.738906 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-04 01:50:06.738919 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-04 01:50:06.738932 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-04 01:50:06.738944 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-04-04 01:50:06.738957 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-04 01:50:06.738982 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-04 01:50:06.739004 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-04 01:50:06.739017 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-04 01:50:06.739030 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-04 01:50:06.739043 | orchestrator |
2025-04-04 01:50:06.739055 | orchestrator | TASK [common : Creating log volume] ********************************************
2025-04-04 01:50:06.739068 | orchestrator | Friday 04 April 2025 01:49:09 +0000 (0:00:04.551) 0:02:12.631 **********
2025-04-04 01:50:06.739080 | orchestrator | changed: [testbed-manager]
2025-04-04 01:50:06.739093 | orchestrator | changed: [testbed-node-0]
2025-04-04 01:50:06.739105 | orchestrator | changed: [testbed-node-1]
2025-04-04 01:50:06.739118 | orchestrator | changed: [testbed-node-2]
2025-04-04 01:50:06.739130 | orchestrator | changed: [testbed-node-3]
2025-04-04 01:50:06.739142 | orchestrator | changed: [testbed-node-4]
2025-04-04 01:50:06.739154 | orchestrator | changed: [testbed-node-5]
2025-04-04 01:50:06.739166 | orchestrator |
2025-04-04 01:50:06.739183 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] ***********************
2025-04-04 01:50:06.739196 | orchestrator | Friday 04 April 2025 01:49:11 +0000 (0:00:01.869) 0:02:14.500 **********
2025-04-04 01:50:06.739208 | orchestrator | changed: [testbed-manager]
2025-04-04 01:50:06.739224 | orchestrator | changed: [testbed-node-0]
2025-04-04 01:50:06.739237 | orchestrator | changed: [testbed-node-1]
2025-04-04 01:50:06.739249 | orchestrator | changed: [testbed-node-2]
2025-04-04 01:50:06.739261 | orchestrator | changed: [testbed-node-3]
2025-04-04 01:50:06.739274 | orchestrator | changed: [testbed-node-4]
2025-04-04 01:50:06.739286 | orchestrator | changed: [testbed-node-5]
2025-04-04 01:50:06.739298 | orchestrator |
2025-04-04 01:50:06.739311 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-04-04 01:50:06.739323 | orchestrator | Friday 04 April 2025 01:49:13 +0000 (0:00:00.085) 0:02:16.162 **********
2025-04-04 01:50:06.739335 | orchestrator |
2025-04-04 01:50:06.739348 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-04-04 01:50:06.739375 | orchestrator | Friday 04 April 2025 01:49:13 +0000 (0:00:00.085) 0:02:16.248 **********
2025-04-04 01:50:06.739393 | orchestrator |
2025-04-04 01:50:06.739406 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-04-04 01:50:06.739418 | orchestrator | Friday 04 April 2025 01:49:13 +0000 (0:00:00.095) 0:02:16.343 **********
2025-04-04 01:50:06.739430 | orchestrator |
2025-04-04 01:50:06.739443 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-04-04 01:50:06.739455 | orchestrator | Friday 04 April 2025 01:49:13 +0000 (0:00:00.085) 0:02:16.429 **********
2025-04-04 01:50:06.739467 | orchestrator |
2025-04-04 01:50:06.739480 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-04-04 01:50:06.739492 | orchestrator | Friday 04 April 2025 01:49:13 +0000 (0:00:00.353) 0:02:16.783 **********
2025-04-04 01:50:06.739504 | orchestrator |
2025-04-04 01:50:06.739516 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-04-04 01:50:06.739529 | orchestrator | Friday 04 April 2025 01:49:14 +0000 (0:00:00.087) 0:02:16.871 **********
2025-04-04 01:50:06.739541 | orchestrator |
2025-04-04 01:50:06.739553 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-04-04 01:50:06.739565 | orchestrator | Friday 04 April 2025 01:49:14 +0000 (0:00:00.060) 0:02:16.931 **********
2025-04-04 01:50:06.739578 | orchestrator |
2025-04-04 01:50:06.739590 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] ***************************
2025-04-04 01:50:06.739602 | orchestrator | Friday 04 April 2025 01:49:14 +0000 (0:00:00.084) 0:02:17.016 **********
2025-04-04 01:50:06.739615 | orchestrator | changed: [testbed-node-2]
2025-04-04 01:50:06.739633 | orchestrator | changed: [testbed-manager]
2025-04-04 01:50:06.739646 | orchestrator | changed: [testbed-node-1]
2025-04-04 01:50:06.739658 | orchestrator | changed: [testbed-node-3]
2025-04-04 01:50:06.739670 | orchestrator | changed: [testbed-node-5]
2025-04-04 01:50:06.739682 | orchestrator | changed: [testbed-node-0]
2025-04-04 01:50:06.739695 | orchestrator | changed: [testbed-node-4]
2025-04-04 01:50:06.739707 | orchestrator |
2025-04-04 01:50:06.739719 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] *********************
2025-04-04 01:50:06.739731 | orchestrator | Friday 04 April 2025 01:49:23 +0000 (0:00:08.974) 0:02:25.991 **********
2025-04-04 01:50:06.739744 | orchestrator | changed: [testbed-node-0]
2025-04-04 01:50:06.739756 | orchestrator | changed: [testbed-node-1]
2025-04-04 01:50:06.739768 | orchestrator | changed: [testbed-node-3]
2025-04-04 01:50:06.739781 | orchestrator | changed: [testbed-node-5]
2025-04-04 01:50:06.739793 | orchestrator | changed: [testbed-node-2]
2025-04-04 01:50:06.739805 | orchestrator | changed: [testbed-node-4]
2025-04-04 01:50:06.739817 | orchestrator | changed: [testbed-manager]
2025-04-04 01:50:06.739829 | orchestrator |
2025-04-04 01:50:06.739842 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] ****
2025-04-04 01:50:06.739854 | orchestrator | Friday 04 April 2025 01:49:51 +0000 (0:00:27.873) 0:02:53.864 **********
2025-04-04 01:50:06.739867 | orchestrator | ok: [testbed-node-0]
2025-04-04 01:50:06.739879 | orchestrator | ok: [testbed-node-1]
2025-04-04 01:50:06.739891 | orchestrator | ok: [testbed-node-2]
2025-04-04 01:50:06.739904 | orchestrator | ok: [testbed-manager]
2025-04-04 01:50:06.739916 | orchestrator | ok: [testbed-node-3]
2025-04-04 01:50:06.739928 | orchestrator | ok: [testbed-node-4]
2025-04-04 01:50:06.739940 | orchestrator | ok: [testbed-node-5]
2025-04-04 01:50:06.739953 | orchestrator |
2025-04-04 01:50:06.739965 | orchestrator | RUNNING HANDLER [common : Restart cron container] ******************************
2025-04-04 01:50:06.739978 | orchestrator | Friday 04 April 2025 01:49:53 +0000 (0:00:02.825) 0:02:56.689 **********
2025-04-04 01:50:06.739990 | orchestrator | changed: [testbed-node-0]
2025-04-04 01:50:06.740002 | orchestrator | changed: [testbed-manager]
2025-04-04 01:50:06.740014 | orchestrator | changed: [testbed-node-4]
2025-04-04 01:50:06.740027 | orchestrator | changed: [testbed-node-3]
2025-04-04 01:50:06.740039 | orchestrator | changed: [testbed-node-2]
2025-04-04 01:50:06.740051 | orchestrator | changed: [testbed-node-1]
2025-04-04 01:50:06.740063 | orchestrator | changed: [testbed-node-5]
2025-04-04 01:50:06.740083 | orchestrator |
2025-04-04 01:50:06.740095 | orchestrator | PLAY RECAP *********************************************************************
2025-04-04 01:50:06.740109 | orchestrator | testbed-manager : ok=25  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-04-04 01:50:06.740122 | orchestrator | testbed-node-0 : ok=21  changed=14  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-04-04 01:50:06.740135 | orchestrator | testbed-node-1 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-04-04 01:50:06.740148 | orchestrator | testbed-node-2 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-04-04 01:50:06.740160 | orchestrator | testbed-node-3 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-04-04 01:50:06.740172 | orchestrator | testbed-node-4 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-04-04 01:50:06.740184 | orchestrator | testbed-node-5 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-04-04 01:50:06.740197 | orchestrator |
2025-04-04 01:50:06.740209 | orchestrator |
2025-04-04 01:50:06.740221 | orchestrator | TASKS RECAP ********************************************************************
2025-04-04 01:50:06.740234 | orchestrator | Friday 04 April 2025 01:50:04 +0000 (0:00:11.011) 0:03:07.701 **********
2025-04-04 01:50:06.740246 | orchestrator | ===============================================================================
2025-04-04 01:50:06.740258 | orchestrator | common : Ensure fluentd image is present for label check --------------- 51.46s
2025-04-04 01:50:06.740270 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 27.87s
2025-04-04 01:50:06.740283 | orchestrator | common : Restart cron container ---------------------------------------- 11.01s
2025-04-04 01:50:06.740295 | orchestrator | common : Restart fluentd container -------------------------------------- 8.97s
2025-04-04 01:50:06.740311 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 7.11s
2025-04-04 01:50:06.740324 | orchestrator | common : Copying over td-agent.conf ------------------------------------- 6.88s
2025-04-04 01:50:06.740336 | orchestrator | common : Copying over config.json files for services -------------------- 6.50s
2025-04-04 01:50:06.740348 | orchestrator | common : Ensuring config directories exist ------------------------------ 6.28s
2025-04-04 01:50:06.740376 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 5.14s
2025-04-04 01:50:06.740388 | orchestrator | common : Check common containers ---------------------------------------- 4.55s
2025-04-04 01:50:06.740400 | orchestrator | common : Fetch fluentd Docker image labels ------------------------------ 4.28s
2025-04-04 01:50:06.740413 | orchestrator | common : Copying over cron logrotate config file
------------------------ 3.87s
2025-04-04 01:50:06.740425 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 3.67s
2025-04-04 01:50:06.740437 | orchestrator | common : include_tasks -------------------------------------------------- 3.52s
2025-04-04 01:50:06.740455 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 3.43s
2025-04-04 01:50:06.741212 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 3.08s
2025-04-04 01:50:06.741289 | orchestrator | common : Ensuring config directories have correct owner and permission --- 3.07s
2025-04-04 01:50:06.741307 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 2.97s
2025-04-04 01:50:06.741320 | orchestrator | common : Initializing toolbox container using normal user --------------- 2.83s
2025-04-04 01:50:06.741333 | orchestrator | common : include_tasks -------------------------------------------------- 2.29s
2025-04-04 01:50:06.741346 | orchestrator | 2025-04-04 01:50:06 | INFO  | Task ea6199f4-e3ab-494f-a811-1da64fa92bb3 is in state STARTED
2025-04-04 01:50:06.741408 | orchestrator | 2025-04-04 01:50:06 | INFO  | Task 9e80c6c9-bfa9-461b-81e0-30bf48ca05ea is in state STARTED
2025-04-04 01:50:06.741435 | orchestrator | 2025-04-04 01:50:06 | INFO  | Task 8a72e806-06b2-4248-8108-4d22850067f9 is in state STARTED
2025-04-04 01:50:06.741830 | orchestrator | 2025-04-04 01:50:06 | INFO  | Task 47c904e3-9e32-4d38-96f7-9d0021928fad is in state STARTED
2025-04-04 01:50:06.742635 | orchestrator | 2025-04-04 01:50:06 | INFO  | Task 454bc2f6-9d45-451f-8ce1-d6a4331b55a5 is in state STARTED
2025-04-04 01:50:09.821772 | orchestrator | 2025-04-04 01:50:06 | INFO  | Wait 1 second(s) until the next check
2025-04-04 01:50:09.821904 | orchestrator | 2025-04-04 01:50:09 | INFO  | Task fd1c07e6-f0db-4865-a8a3-41f95612fdf3 is in state STARTED
2025-04-04 01:50:09.823066 | orchestrator | 2025-04-04 01:50:09 | INFO  | Task ea6199f4-e3ab-494f-a811-1da64fa92bb3 is in state STARTED
2025-04-04 01:50:09.834543 | orchestrator | 2025-04-04 01:50:09 | INFO  | Task 9e80c6c9-bfa9-461b-81e0-30bf48ca05ea is in state STARTED
2025-04-04 01:50:09.837435 | orchestrator | 2025-04-04 01:50:09 | INFO  | Task 8a72e806-06b2-4248-8108-4d22850067f9 is in state STARTED
2025-04-04 01:50:09.837464 | orchestrator | 2025-04-04 01:50:09 | INFO  | Task 47c904e3-9e32-4d38-96f7-9d0021928fad is in state STARTED
2025-04-04 01:50:09.838268 | orchestrator | 2025-04-04 01:50:09 | INFO  | Task 454bc2f6-9d45-451f-8ce1-d6a4331b55a5 is in state STARTED
2025-04-04 01:50:12.876779 | orchestrator | 2025-04-04 01:50:09 | INFO  | Wait 1 second(s) until the next check
2025-04-04 01:50:12.876929 | orchestrator | 2025-04-04 01:50:12 | INFO  | Task fd1c07e6-f0db-4865-a8a3-41f95612fdf3 is in state STARTED
2025-04-04 01:50:12.878495 | orchestrator | 2025-04-04 01:50:12 | INFO  | Task ea6199f4-e3ab-494f-a811-1da64fa92bb3 is in state STARTED
2025-04-04 01:50:12.879198 | orchestrator | 2025-04-04 01:50:12 | INFO  | Task 9e80c6c9-bfa9-461b-81e0-30bf48ca05ea is in state STARTED
2025-04-04 01:50:12.881122 | orchestrator | 2025-04-04 01:50:12 | INFO  | Task 8a72e806-06b2-4248-8108-4d22850067f9 is in state STARTED
2025-04-04 01:50:12.882088 | orchestrator | 2025-04-04 01:50:12 | INFO  | Task 47c904e3-9e32-4d38-96f7-9d0021928fad is in state STARTED
2025-04-04 01:50:12.883060 | orchestrator | 2025-04-04 01:50:12 | INFO  | Task 454bc2f6-9d45-451f-8ce1-d6a4331b55a5 is in state STARTED
2025-04-04 01:50:15.939210 | orchestrator | 2025-04-04 01:50:12 | INFO  | Wait 1 second(s) until the next check
2025-04-04 01:50:15.939426 | orchestrator | 2025-04-04 01:50:15 | INFO  | Task fd1c07e6-f0db-4865-a8a3-41f95612fdf3 is in state STARTED
2025-04-04 01:50:15.940127 | orchestrator | 2025-04-04 01:50:15 | INFO  | Task ea6199f4-e3ab-494f-a811-1da64fa92bb3 is in state STARTED
2025-04-04 01:50:15.944178 | orchestrator | 2025-04-04 01:50:15 | INFO  | Task 9e80c6c9-bfa9-461b-81e0-30bf48ca05ea is in state STARTED
2025-04-04 01:50:15.947689 | orchestrator | 2025-04-04 01:50:15 | INFO  | Task 8a72e806-06b2-4248-8108-4d22850067f9 is in state STARTED
2025-04-04 01:50:15.949091 | orchestrator | 2025-04-04 01:50:15 | INFO  | Task 47c904e3-9e32-4d38-96f7-9d0021928fad is in state STARTED
2025-04-04 01:50:15.949135 | orchestrator | 2025-04-04 01:50:15 | INFO  | Task 454bc2f6-9d45-451f-8ce1-d6a4331b55a5 is in state STARTED
2025-04-04 01:50:19.053875 | orchestrator | 2025-04-04 01:50:15 | INFO  | Wait 1 second(s) until the next check
2025-04-04 01:50:19.054076 | orchestrator | 2025-04-04 01:50:19 | INFO  | Task fd1c07e6-f0db-4865-a8a3-41f95612fdf3 is in state STARTED
2025-04-04 01:50:19.054337 | orchestrator | 2025-04-04 01:50:19 | INFO  | Task ea6199f4-e3ab-494f-a811-1da64fa92bb3 is in state STARTED
2025-04-04 01:50:19.054446 | orchestrator | 2025-04-04 01:50:19 | INFO  | Task 9e80c6c9-bfa9-461b-81e0-30bf48ca05ea is in state STARTED
2025-04-04 01:50:19.055816 | orchestrator | 2025-04-04 01:50:19 | INFO  | Task 8a72e806-06b2-4248-8108-4d22850067f9 is in state STARTED
2025-04-04 01:50:19.059311 | orchestrator | 2025-04-04 01:50:19 | INFO  | Task 47c904e3-9e32-4d38-96f7-9d0021928fad is in state STARTED
2025-04-04 01:50:22.115561 | orchestrator | 2025-04-04 01:50:19 | INFO  | Task 454bc2f6-9d45-451f-8ce1-d6a4331b55a5 is in state STARTED
2025-04-04 01:50:22.115695 | orchestrator | 2025-04-04 01:50:19 | INFO  | Wait 1 second(s) until the next check
2025-04-04 01:50:22.115732 | orchestrator | 2025-04-04 01:50:22 | INFO  | Task fd1c07e6-f0db-4865-a8a3-41f95612fdf3 is in state STARTED
2025-04-04 01:50:22.116061 | orchestrator | 2025-04-04 01:50:22 | INFO  | Task ea6199f4-e3ab-494f-a811-1da64fa92bb3 is in state STARTED
2025-04-04 01:50:22.116094 | orchestrator | 2025-04-04 01:50:22 | INFO  | Task 9e80c6c9-bfa9-461b-81e0-30bf48ca05ea is in state STARTED
2025-04-04 01:50:22.116882 | orchestrator | 2025-04-04 01:50:22 | INFO  | Task 8a72e806-06b2-4248-8108-4d22850067f9 is in state STARTED
2025-04-04 01:50:22.118106 | orchestrator | 2025-04-04 01:50:22 | INFO  | Task 47c904e3-9e32-4d38-96f7-9d0021928fad is in state STARTED
2025-04-04 01:50:22.122584 | orchestrator | 2025-04-04 01:50:22 | INFO  | Task 454bc2f6-9d45-451f-8ce1-d6a4331b55a5 is in state STARTED
2025-04-04 01:50:25.203018 | orchestrator | 2025-04-04 01:50:22 | INFO  | Wait 1 second(s) until the next check
2025-04-04 01:50:25.203181 | orchestrator | 2025-04-04 01:50:25 | INFO  | Task fd1c07e6-f0db-4865-a8a3-41f95612fdf3 is in state STARTED
2025-04-04 01:50:25.206215 | orchestrator | 2025-04-04 01:50:25 | INFO  | Task ea6199f4-e3ab-494f-a811-1da64fa92bb3 is in state STARTED
2025-04-04 01:50:25.206606 | orchestrator | 2025-04-04 01:50:25 | INFO  | Task 9e80c6c9-bfa9-461b-81e0-30bf48ca05ea is in state STARTED
2025-04-04 01:50:25.207097 | orchestrator | 2025-04-04 01:50:25 | INFO  | Task 8a72e806-06b2-4248-8108-4d22850067f9 is in state STARTED
2025-04-04 01:50:25.207654 | orchestrator | 2025-04-04 01:50:25 | INFO  | Task 47c904e3-9e32-4d38-96f7-9d0021928fad is in state STARTED
2025-04-04 01:50:25.208661 | orchestrator | 2025-04-04 01:50:25 | INFO  | Task 454bc2f6-9d45-451f-8ce1-d6a4331b55a5 is in state STARTED
2025-04-04 01:50:28.258134 | orchestrator | 2025-04-04 01:50:25 | INFO  | Wait 1 second(s) until the next check
2025-04-04 01:50:28.258284 | orchestrator | 2025-04-04 01:50:28 | INFO  | Task fd1c07e6-f0db-4865-a8a3-41f95612fdf3 is in state STARTED
2025-04-04 01:50:28.260927 | orchestrator | 2025-04-04 01:50:28 | INFO  | Task ea6199f4-e3ab-494f-a811-1da64fa92bb3 is in state STARTED
2025-04-04 01:50:28.262794 | orchestrator | 2025-04-04 01:50:28 | INFO  | Task 9e80c6c9-bfa9-461b-81e0-30bf48ca05ea is in state STARTED
2025-04-04 01:50:28.265062 | orchestrator | 2025-04-04 01:50:28 | INFO  | Task 8a72e806-06b2-4248-8108-4d22850067f9 is in state STARTED
2025-04-04 01:50:28.266826 | orchestrator |
2025-04-04 01:50:28 | INFO  | Task 47c904e3-9e32-4d38-96f7-9d0021928fad is in state STARTED
2025-04-04 01:50:28.268692 | orchestrator | 2025-04-04 01:50:28 | INFO  | Task 454bc2f6-9d45-451f-8ce1-d6a4331b55a5 is in state STARTED
2025-04-04 01:50:31.309889 | orchestrator | 2025-04-04 01:50:28 | INFO  | Wait 1 second(s) until the next check
2025-04-04 01:50:31.310078 | orchestrator | 2025-04-04 01:50:31 | INFO  | Task fd1c07e6-f0db-4865-a8a3-41f95612fdf3 is in state STARTED
2025-04-04 01:50:31.310658 | orchestrator | 2025-04-04 01:50:31 | INFO  | Task ea6199f4-e3ab-494f-a811-1da64fa92bb3 is in state STARTED
2025-04-04 01:50:31.313107 | orchestrator | 2025-04-04 01:50:31 | INFO  | Task 9e80c6c9-bfa9-461b-81e0-30bf48ca05ea is in state STARTED
2025-04-04 01:50:31.314324 | orchestrator | 2025-04-04 01:50:31 | INFO  | Task 8a72e806-06b2-4248-8108-4d22850067f9 is in state STARTED
2025-04-04 01:50:31.314920 | orchestrator | 2025-04-04 01:50:31 | INFO  | Task 47c904e3-9e32-4d38-96f7-9d0021928fad is in state SUCCESS
2025-04-04 01:50:31.315800 | orchestrator | 2025-04-04 01:50:31 | INFO  | Task 4700ab06-d1be-44f1-9c11-804d62428201 is in state STARTED
2025-04-04 01:50:31.317257 | orchestrator | 2025-04-04 01:50:31 | INFO  | Task 454bc2f6-9d45-451f-8ce1-d6a4331b55a5 is in state STARTED
2025-04-04 01:50:34.375431 | orchestrator | 2025-04-04 01:50:31 | INFO  | Wait 1 second(s) until the next check
2025-04-04 01:50:34.375570 | orchestrator | 2025-04-04 01:50:34 | INFO  | Task fd1c07e6-f0db-4865-a8a3-41f95612fdf3 is in state STARTED
2025-04-04 01:50:34.375854 | orchestrator | 2025-04-04 01:50:34 | INFO  | Task ea6199f4-e3ab-494f-a811-1da64fa92bb3 is in state STARTED
2025-04-04 01:50:34.380763 | orchestrator | 2025-04-04 01:50:34 | INFO  | Task 9e80c6c9-bfa9-461b-81e0-30bf48ca05ea is in state STARTED
2025-04-04 01:50:34.381109 | orchestrator | 2025-04-04 01:50:34 | INFO  | Task 8a72e806-06b2-4248-8108-4d22850067f9 is in state STARTED
2025-04-04 01:50:34.384950 | orchestrator | 2025-04-04 01:50:34 | INFO  | Task 4700ab06-d1be-44f1-9c11-804d62428201 is in state STARTED
2025-04-04 01:50:34.385987 | orchestrator | 2025-04-04 01:50:34 | INFO  | Task 454bc2f6-9d45-451f-8ce1-d6a4331b55a5 is in state STARTED
2025-04-04 01:50:37.446693 | orchestrator | 2025-04-04 01:50:34 | INFO  | Wait 1 second(s) until the next check
2025-04-04 01:50:37.446821 | orchestrator | 2025-04-04 01:50:37 | INFO  | Task fd1c07e6-f0db-4865-a8a3-41f95612fdf3 is in state STARTED
2025-04-04 01:50:37.452525 | orchestrator | 2025-04-04 01:50:37 | INFO  | Task ea6199f4-e3ab-494f-a811-1da64fa92bb3 is in state STARTED
2025-04-04 01:50:40.528486 | orchestrator | 2025-04-04 01:50:37 | INFO  | Task 9e80c6c9-bfa9-461b-81e0-30bf48ca05ea is in state STARTED
2025-04-04 01:50:40.528630 | orchestrator | 2025-04-04 01:50:37 | INFO  | Task 8a72e806-06b2-4248-8108-4d22850067f9 is in state STARTED
2025-04-04 01:50:40.528650 | orchestrator | 2025-04-04 01:50:37 | INFO  | Task 4700ab06-d1be-44f1-9c11-804d62428201 is in state STARTED
2025-04-04 01:50:40.528665 | orchestrator | 2025-04-04 01:50:37 | INFO  | Task 454bc2f6-9d45-451f-8ce1-d6a4331b55a5 is in state STARTED
2025-04-04 01:50:40.528681 | orchestrator | 2025-04-04 01:50:37 | INFO  | Wait 1 second(s) until the next check
2025-04-04 01:50:40.528715 | orchestrator | 2025-04-04 01:50:40 | INFO  | Task fd1c07e6-f0db-4865-a8a3-41f95612fdf3 is in state STARTED
2025-04-04 01:50:40.530553 | orchestrator | 2025-04-04 01:50:40 | INFO  | Task ea6199f4-e3ab-494f-a811-1da64fa92bb3 is in state STARTED
2025-04-04 01:50:40.530589 | orchestrator | 2025-04-04 01:50:40 | INFO  | Task 9e80c6c9-bfa9-461b-81e0-30bf48ca05ea is in state STARTED
2025-04-04 01:50:40.533818 | orchestrator | 2025-04-04 01:50:40 | INFO  | Task 8a72e806-06b2-4248-8108-4d22850067f9 is in state STARTED
2025-04-04 01:50:40.537113 | orchestrator | 2025-04-04 01:50:40 | INFO  | Task 4700ab06-d1be-44f1-9c11-804d62428201 is in state STARTED
2025-04-04 01:50:40.537977 | orchestrator | 2025-04-04 01:50:40 | INFO  | Task 454bc2f6-9d45-451f-8ce1-d6a4331b55a5 is in state STARTED
2025-04-04 01:50:43.596701 | orchestrator | 2025-04-04 01:50:40 | INFO  | Wait 1 second(s) until the next check
2025-04-04 01:50:43.596849 | orchestrator | 2025-04-04 01:50:43 | INFO  | Task fd1c07e6-f0db-4865-a8a3-41f95612fdf3 is in state STARTED
2025-04-04 01:50:43.597584 | orchestrator | 2025-04-04 01:50:43 | INFO  | Task ea6199f4-e3ab-494f-a811-1da64fa92bb3 is in state STARTED
2025-04-04 01:50:43.598973 | orchestrator | 2025-04-04 01:50:43 | INFO  | Task 9e80c6c9-bfa9-461b-81e0-30bf48ca05ea is in state SUCCESS
2025-04-04 01:50:43.601052 | orchestrator |
2025-04-04 01:50:43.601086 | orchestrator |
2025-04-04 01:50:43.601100 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-04-04 01:50:43.601114 | orchestrator |
2025-04-04 01:50:43.601128 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-04-04 01:50:43.601142 | orchestrator | Friday 04 April 2025 01:50:11 +0000 (0:00:00.610) 0:00:00.610 **********
2025-04-04 01:50:43.601155 | orchestrator | ok: [testbed-node-0]
2025-04-04 01:50:43.601172 | orchestrator | ok: [testbed-node-1]
2025-04-04 01:50:43.601185 | orchestrator | ok: [testbed-node-2]
2025-04-04 01:50:43.601199 | orchestrator |
2025-04-04 01:50:43.601212 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-04-04 01:50:43.601225 | orchestrator | Friday 04 April 2025 01:50:11 +0000 (0:00:00.620) 0:00:01.230 **********
2025-04-04 01:50:43.601240 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True)
2025-04-04 01:50:43.601254 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True)
2025-04-04 01:50:43.601267 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True)
2025-04-04 01:50:43.601280 | orchestrator |
2025-04-04 01:50:43.601294 | orchestrator | PLAY [Apply role memcached] ****************************************************
2025-04-04 01:50:43.601307 | orchestrator |
2025-04-04 01:50:43.601320 | orchestrator | TASK [memcached : include_tasks] ***********************************************
2025-04-04 01:50:43.601334 | orchestrator | Friday 04 April 2025 01:50:12 +0000 (0:00:00.451) 0:00:01.681 **********
2025-04-04 01:50:43.601377 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-04-04 01:50:43.601391 | orchestrator |
2025-04-04 01:50:43.601403 | orchestrator | TASK [memcached : Ensuring config directories exist] ***************************
2025-04-04 01:50:43.601416 | orchestrator | Friday 04 April 2025 01:50:13 +0000 (0:00:01.612) 0:00:03.294 **********
2025-04-04 01:50:43.601428 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2025-04-04 01:50:43.601441 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2025-04-04 01:50:43.601453 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2025-04-04 01:50:43.601466 | orchestrator |
2025-04-04 01:50:43.601478 | orchestrator | TASK [memcached : Copying over config.json files for services] *****************
2025-04-04 01:50:43.601491 | orchestrator | Friday 04 April 2025 01:50:15 +0000 (0:00:02.074) 0:00:05.369 **********
2025-04-04 01:50:43.601503 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2025-04-04 01:50:43.601516 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2025-04-04 01:50:43.601528 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2025-04-04 01:50:43.601540 | orchestrator |
2025-04-04 01:50:43.601553 | orchestrator | TASK [memcached : Check memcached container] ***********************************
2025-04-04 01:50:43.601566 | orchestrator | Friday 04 April 2025 01:50:21 +0000 (0:00:05.304) 0:00:10.673 **********
2025-04-04 01:50:43.601578 | orchestrator | changed: [testbed-node-1]
2025-04-04 01:50:43.601607 | orchestrator | changed: [testbed-node-0]
2025-04-04 01:50:43.601620 | orchestrator | changed: [testbed-node-2]
2025-04-04 01:50:43.601632 | orchestrator |
2025-04-04 01:50:43.601645 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] **********************
2025-04-04 01:50:43.601694 | orchestrator | Friday 04 April 2025 01:50:25 +0000 (0:00:03.941) 0:00:14.615 **********
2025-04-04 01:50:43.601708 | orchestrator | changed: [testbed-node-0]
2025-04-04 01:50:43.601721 | orchestrator | changed: [testbed-node-1]
2025-04-04 01:50:43.601733 | orchestrator | changed: [testbed-node-2]
2025-04-04 01:50:43.601746 | orchestrator |
2025-04-04 01:50:43.601763 | orchestrator | PLAY RECAP *********************************************************************
2025-04-04 01:50:43.601776 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-04-04 01:50:43.601804 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-04-04 01:50:43.601816 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-04-04 01:50:43.601829 | orchestrator |
2025-04-04 01:50:43.601842 | orchestrator |
2025-04-04 01:50:43.601854 | orchestrator | TASKS RECAP ********************************************************************
2025-04-04 01:50:43.601866 | orchestrator | Friday 04 April 2025 01:50:29 +0000 (0:00:04.160) 0:00:18.775 **********
2025-04-04 01:50:43.601879 | orchestrator | ===============================================================================
2025-04-04 01:50:43.601891 | orchestrator | memcached : Copying over config.json files for services ----------------- 5.30s
2025-04-04 01:50:43.601903 | orchestrator | memcached : Restart memcached container --------------------------------- 4.16s
2025-04-04 01:50:43.601915 | orchestrator | memcached : Check memcached container ----------------------------------- 3.94s
2025-04-04
01:50:43.601928 | orchestrator | memcached : Ensuring config directories exist --------------------------- 2.07s 2025-04-04 01:50:43.601940 | orchestrator | memcached : include_tasks ----------------------------------------------- 1.61s 2025-04-04 01:50:43.601952 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.62s 2025-04-04 01:50:43.601964 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.45s 2025-04-04 01:50:43.601977 | orchestrator | 2025-04-04 01:50:43.601989 | orchestrator | 2025-04-04 01:50:43.602001 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-04-04 01:50:43.602014 | orchestrator | 2025-04-04 01:50:43.602074 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-04-04 01:50:43.602087 | orchestrator | Friday 04 April 2025 01:50:12 +0000 (0:00:00.571) 0:00:00.571 ********** 2025-04-04 01:50:43.602100 | orchestrator | ok: [testbed-node-0] 2025-04-04 01:50:43.602112 | orchestrator | ok: [testbed-node-1] 2025-04-04 01:50:43.602125 | orchestrator | ok: [testbed-node-2] 2025-04-04 01:50:43.602138 | orchestrator | 2025-04-04 01:50:43.602150 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-04-04 01:50:43.602174 | orchestrator | Friday 04 April 2025 01:50:12 +0000 (0:00:00.700) 0:00:01.272 ********** 2025-04-04 01:50:43.602188 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True) 2025-04-04 01:50:43.602200 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True) 2025-04-04 01:50:43.602213 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True) 2025-04-04 01:50:43.602225 | orchestrator | 2025-04-04 01:50:43.602238 | orchestrator | PLAY [Apply role redis] ******************************************************** 2025-04-04 01:50:43.602250 | orchestrator | 2025-04-04 01:50:43.602263 | orchestrator 
| TASK [redis : include_tasks] *************************************************** 2025-04-04 01:50:43.602275 | orchestrator | Friday 04 April 2025 01:50:13 +0000 (0:00:00.703) 0:00:01.975 ********** 2025-04-04 01:50:43.602290 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-04 01:50:43.602303 | orchestrator | 2025-04-04 01:50:43.602315 | orchestrator | TASK [redis : Ensuring config directories exist] ******************************* 2025-04-04 01:50:43.602328 | orchestrator | Friday 04 April 2025 01:50:16 +0000 (0:00:02.907) 0:00:04.883 ********** 2025-04-04 01:50:43.602360 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-04-04 01:50:43.602388 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-04-04 01:50:43.602402 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 
'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-04-04 01:50:43.602416 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-04-04 01:50:43.602429 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-04-04 01:50:43.602457 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-04-04 01:50:43.602471 | orchestrator | 2025-04-04 01:50:43.602484 | orchestrator | TASK [redis : Copying over default config.json files] ************************** 2025-04-04 01:50:43.602496 | orchestrator | Friday 04 April 2025 01:50:20 +0000 (0:00:03.620) 0:00:08.504 ********** 2025-04-04 01:50:43.602509 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-04-04 01:50:43.602529 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-04-04 01:50:43.602542 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-04-04 01:50:43.602556 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-04-04 01:50:43.602569 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-04-04 01:50:43.602603 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-04-04 01:50:43.602616 | orchestrator | 2025-04-04 01:50:43.602629 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2025-04-04 01:50:43.602641 | orchestrator | Friday 04 April 2025 01:50:24 +0000 (0:00:04.330) 0:00:12.834 ********** 2025-04-04 01:50:43.602654 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-04-04 01:50:43.602673 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 
'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-04-04 01:50:43.602686 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-04-04 01:50:43.602699 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-04-04 01:50:43.602712 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': 
{'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-04-04 01:50:43.602732 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-04-04 01:50:43.602746 | orchestrator | 2025-04-04 01:50:43.602758 | orchestrator | TASK [redis : Check redis containers] ****************************************** 2025-04-04 01:50:43.602771 | orchestrator | Friday 04 April 2025 01:50:28 +0000 (0:00:04.383) 0:00:17.217 ********** 2025-04-04 01:50:43.602784 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-04-04 01:50:43.602803 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-04-04 01:50:43.602816 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-04-04 01:50:43.602830 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-04-04 01:50:43.602843 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-04-04 01:50:43.602861 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-04-04 01:50:43.604263 | orchestrator | 2025-04-04 01:50:43.604394 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-04-04 01:50:43.604418 | orchestrator | Friday 04 April 2025 01:50:31 +0000 (0:00:02.654) 0:00:19.872 ********** 2025-04-04 01:50:43.604459 | orchestrator | 2025-04-04 01:50:43.604475 | 
orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-04-04 01:50:43.604489 | orchestrator | Friday 04 April 2025 01:50:31 +0000 (0:00:00.074) 0:00:19.946 ********** 2025-04-04 01:50:43.604503 | orchestrator | 2025-04-04 01:50:43.604517 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-04-04 01:50:43.604531 | orchestrator | Friday 04 April 2025 01:50:31 +0000 (0:00:00.153) 0:00:20.100 ********** 2025-04-04 01:50:43.604545 | orchestrator | 2025-04-04 01:50:43.604559 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ****************************** 2025-04-04 01:50:43.604573 | orchestrator | Friday 04 April 2025 01:50:32 +0000 (0:00:00.513) 0:00:20.614 ********** 2025-04-04 01:50:43.604587 | orchestrator | changed: [testbed-node-0] 2025-04-04 01:50:43.604603 | orchestrator | changed: [testbed-node-2] 2025-04-04 01:50:43.604616 | orchestrator | changed: [testbed-node-1] 2025-04-04 01:50:43.604630 | orchestrator | 2025-04-04 01:50:43.604645 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] ********************* 2025-04-04 01:50:43.604659 | orchestrator | Friday 04 April 2025 01:50:36 +0000 (0:00:03.936) 0:00:24.551 ********** 2025-04-04 01:50:43.604673 | orchestrator | changed: [testbed-node-0] 2025-04-04 01:50:43.604687 | orchestrator | changed: [testbed-node-1] 2025-04-04 01:50:43.604717 | orchestrator | changed: [testbed-node-2] 2025-04-04 01:50:43.604731 | orchestrator | 2025-04-04 01:50:43.604745 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-04 01:50:43.604760 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-04 01:50:43.604775 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-04 01:50:43.604790 | orchestrator | testbed-node-2 : ok=9  changed=6  
unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-04 01:50:43.604804 | orchestrator | 2025-04-04 01:50:43.604818 | orchestrator | 2025-04-04 01:50:43.604833 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-04 01:50:43.604850 | orchestrator | Friday 04 April 2025 01:50:40 +0000 (0:00:04.708) 0:00:29.259 ********** 2025-04-04 01:50:43.604866 | orchestrator | =============================================================================== 2025-04-04 01:50:43.604881 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 4.71s 2025-04-04 01:50:43.604897 | orchestrator | redis : Copying over redis config files --------------------------------- 4.38s 2025-04-04 01:50:43.604914 | orchestrator | redis : Copying over default config.json files -------------------------- 4.33s 2025-04-04 01:50:43.604929 | orchestrator | redis : Restart redis container ----------------------------------------- 3.94s 2025-04-04 01:50:43.604945 | orchestrator | redis : Ensuring config directories exist ------------------------------- 3.62s 2025-04-04 01:50:43.604961 | orchestrator | redis : include_tasks --------------------------------------------------- 2.91s 2025-04-04 01:50:43.604977 | orchestrator | redis : Check redis containers ------------------------------------------ 2.65s 2025-04-04 01:50:43.604993 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.74s 2025-04-04 01:50:43.605009 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.70s 2025-04-04 01:50:43.605024 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.70s 2025-04-04 01:50:43.605040 | orchestrator | 2025-04-04 01:50:43 | INFO  | Task 8a72e806-06b2-4248-8108-4d22850067f9 is in state STARTED 2025-04-04 01:50:43.605071 | orchestrator | 2025-04-04 01:50:43 | INFO  | Task 4700ab06-d1be-44f1-9c11-804d62428201 
is in state STARTED 2025-04-04 01:50:43.607144 | orchestrator | 2025-04-04 01:50:43 | INFO  | Task 454bc2f6-9d45-451f-8ce1-d6a4331b55a5 is in state STARTED 2025-04-04 01:50:46.658479 | orchestrator | 2025-04-04 01:50:43 | INFO  | Wait 1 second(s) until the next check 2025-04-04 01:50:46.658649 | orchestrator | 2025-04-04 01:50:46 | INFO  | Task fd1c07e6-f0db-4865-a8a3-41f95612fdf3 is in state STARTED 2025-04-04 01:50:46.659485 | orchestrator | 2025-04-04 01:50:46 | INFO  | Task ea6199f4-e3ab-494f-a811-1da64fa92bb3 is in state STARTED 2025-04-04 01:50:46.659514 | orchestrator | 2025-04-04 01:50:46 | INFO  | Task 8a72e806-06b2-4248-8108-4d22850067f9 is in state STARTED 2025-04-04 01:50:46.660236 | orchestrator | 2025-04-04 01:50:46 | INFO  | Task 4700ab06-d1be-44f1-9c11-804d62428201 is in state STARTED 2025-04-04 01:50:46.661189 | orchestrator | 2025-04-04 01:50:46 | INFO  | Task 454bc2f6-9d45-451f-8ce1-d6a4331b55a5 is in state STARTED 2025-04-04 01:50:49.709716 | orchestrator | 2025-04-04 01:50:46 | INFO  | Wait 1 second(s) until the next check 2025-04-04 01:51:23.413847 | orchestrator | 2025-04-04 01:51:20 | INFO  | Wait 1 second(s) until the next check 2025-04-04 01:51:23.413997 | orchestrator | 2025-04-04 01:51:23 | INFO  | Task fd1c07e6-f0db-4865-a8a3-41f95612fdf3 is in state STARTED 2025-04-04 01:51:23.414855 | orchestrator | 2025-04-04 01:51:23 | INFO  |
Task ea6199f4-e3ab-494f-a811-1da64fa92bb3 is in state STARTED 2025-04-04 01:51:23.415448 | orchestrator | 2025-04-04 01:51:23 | INFO  | Task 8a72e806-06b2-4248-8108-4d22850067f9 is in state STARTED 2025-04-04 01:51:23.416287 | orchestrator | 2025-04-04 01:51:23 | INFO  | Task 4700ab06-d1be-44f1-9c11-804d62428201 is in state STARTED 2025-04-04 01:51:23.417229 | orchestrator | 2025-04-04 01:51:23 | INFO  | Task 454bc2f6-9d45-451f-8ce1-d6a4331b55a5 is in state STARTED 2025-04-04 01:51:26.469945 | orchestrator | 2025-04-04 01:51:23 | INFO  | Wait 1 second(s) until the next check 2025-04-04 01:51:26.470138 | orchestrator | 2025-04-04 01:51:26 | INFO  | Task fd1c07e6-f0db-4865-a8a3-41f95612fdf3 is in state STARTED 2025-04-04 01:51:26.471153 | orchestrator | 2025-04-04 01:51:26 | INFO  | Task ea6199f4-e3ab-494f-a811-1da64fa92bb3 is in state STARTED 2025-04-04 01:51:26.473861 | orchestrator | 2025-04-04 01:51:26 | INFO  | Task 8a72e806-06b2-4248-8108-4d22850067f9 is in state STARTED 2025-04-04 01:51:26.477544 | orchestrator | 2025-04-04 01:51:26 | INFO  | Task 4700ab06-d1be-44f1-9c11-804d62428201 is in state STARTED 2025-04-04 01:51:26.480700 | orchestrator | 2025-04-04 01:51:26 | INFO  | Task 454bc2f6-9d45-451f-8ce1-d6a4331b55a5 is in state STARTED 2025-04-04 01:51:29.532204 | orchestrator | 2025-04-04 01:51:26 | INFO  | Wait 1 second(s) until the next check 2025-04-04 01:51:29.532434 | orchestrator | 2025-04-04 01:51:29 | INFO  | Task fd1c07e6-f0db-4865-a8a3-41f95612fdf3 is in state STARTED 2025-04-04 01:51:29.532740 | orchestrator | 2025-04-04 01:51:29 | INFO  | Task ea6199f4-e3ab-494f-a811-1da64fa92bb3 is in state STARTED 2025-04-04 01:51:29.532775 | orchestrator | 2025-04-04 01:51:29 | INFO  | Task 8a72e806-06b2-4248-8108-4d22850067f9 is in state STARTED 2025-04-04 01:51:29.533775 | orchestrator | 2025-04-04 01:51:29 | INFO  | Task 4700ab06-d1be-44f1-9c11-804d62428201 is in state STARTED 2025-04-04 01:51:29.534688 | orchestrator | 2025-04-04 01:51:29 | INFO  | Task 
454bc2f6-9d45-451f-8ce1-d6a4331b55a5 is in state STARTED 2025-04-04 01:51:32.573566 | orchestrator | 2025-04-04 01:51:29 | INFO  | Wait 1 second(s) until the next check 2025-04-04 01:51:32.573710 | orchestrator | 2025-04-04 01:51:32 | INFO  | Task fd1c07e6-f0db-4865-a8a3-41f95612fdf3 is in state STARTED 2025-04-04 01:51:32.575146 | orchestrator | 2025-04-04 01:51:32 | INFO  | Task ea6199f4-e3ab-494f-a811-1da64fa92bb3 is in state STARTED 2025-04-04 01:51:32.577021 | orchestrator | 2025-04-04 01:51:32 | INFO  | Task 8a72e806-06b2-4248-8108-4d22850067f9 is in state STARTED 2025-04-04 01:51:32.577571 | orchestrator | 2025-04-04 01:51:32 | INFO  | Task 4700ab06-d1be-44f1-9c11-804d62428201 is in state STARTED 2025-04-04 01:51:32.579090 | orchestrator | 2025-04-04 01:51:32 | INFO  | Task 454bc2f6-9d45-451f-8ce1-d6a4331b55a5 is in state STARTED 2025-04-04 01:51:35.622966 | orchestrator | 2025-04-04 01:51:32 | INFO  | Wait 1 second(s) until the next check 2025-04-04 01:51:35.623105 | orchestrator | 2025-04-04 01:51:35 | INFO  | Task fd1c07e6-f0db-4865-a8a3-41f95612fdf3 is in state STARTED 2025-04-04 01:51:35.623753 | orchestrator | 2025-04-04 01:51:35 | INFO  | Task ea6199f4-e3ab-494f-a811-1da64fa92bb3 is in state STARTED 2025-04-04 01:51:35.624810 | orchestrator | 2025-04-04 01:51:35 | INFO  | Task 8a72e806-06b2-4248-8108-4d22850067f9 is in state STARTED 2025-04-04 01:51:35.625774 | orchestrator | 2025-04-04 01:51:35 | INFO  | Task 4700ab06-d1be-44f1-9c11-804d62428201 is in state STARTED 2025-04-04 01:51:35.626985 | orchestrator | 2025-04-04 01:51:35 | INFO  | Task 454bc2f6-9d45-451f-8ce1-d6a4331b55a5 is in state STARTED 2025-04-04 01:51:38.679816 | orchestrator | 2025-04-04 01:51:35 | INFO  | Wait 1 second(s) until the next check 2025-04-04 01:51:38.680018 | orchestrator | 2025-04-04 01:51:38 | INFO  | Task fd1c07e6-f0db-4865-a8a3-41f95612fdf3 is in state STARTED 2025-04-04 01:51:38.680121 | orchestrator | 2025-04-04 01:51:38 | INFO  | Task 
ea6199f4-e3ab-494f-a811-1da64fa92bb3 is in state STARTED 2025-04-04 01:51:38.681860 | orchestrator | 2025-04-04 01:51:38 | INFO  | Task 8a72e806-06b2-4248-8108-4d22850067f9 is in state STARTED 2025-04-04 01:51:38.682687 | orchestrator | 2025-04-04 01:51:38 | INFO  | Task 4700ab06-d1be-44f1-9c11-804d62428201 is in state STARTED 2025-04-04 01:51:38.684305 | orchestrator | 2025-04-04 01:51:38 | INFO  | Task 454bc2f6-9d45-451f-8ce1-d6a4331b55a5 is in state STARTED 2025-04-04 01:51:41.729445 | orchestrator | 2025-04-04 01:51:38 | INFO  | Wait 1 second(s) until the next check 2025-04-04 01:51:41.729597 | orchestrator | 2025-04-04 01:51:41 | INFO  | Task fd1c07e6-f0db-4865-a8a3-41f95612fdf3 is in state STARTED 2025-04-04 01:51:41.731815 | orchestrator | 2025-04-04 01:51:41 | INFO  | Task ea6199f4-e3ab-494f-a811-1da64fa92bb3 is in state STARTED 2025-04-04 01:51:41.737065 | orchestrator | 2025-04-04 01:51:41 | INFO  | Task 8a72e806-06b2-4248-8108-4d22850067f9 is in state STARTED 2025-04-04 01:51:41.738632 | orchestrator | 2025-04-04 01:51:41 | INFO  | Task 4700ab06-d1be-44f1-9c11-804d62428201 is in state STARTED 2025-04-04 01:51:41.743040 | orchestrator | 2025-04-04 01:51:41 | INFO  | Task 454bc2f6-9d45-451f-8ce1-d6a4331b55a5 is in state STARTED 2025-04-04 01:51:44.804852 | orchestrator | 2025-04-04 01:51:41 | INFO  | Wait 1 second(s) until the next check 2025-04-04 01:51:44.804999 | orchestrator | 2025-04-04 01:51:44 | INFO  | Task fd1c07e6-f0db-4865-a8a3-41f95612fdf3 is in state STARTED 2025-04-04 01:51:47.845905 | orchestrator | 2025-04-04 01:51:44 | INFO  | Task ea6199f4-e3ab-494f-a811-1da64fa92bb3 is in state STARTED 2025-04-04 01:51:47.846084 | orchestrator | 2025-04-04 01:51:44 | INFO  | Task 8a72e806-06b2-4248-8108-4d22850067f9 is in state STARTED 2025-04-04 01:51:47.846107 | orchestrator | 2025-04-04 01:51:44 | INFO  | Task 4700ab06-d1be-44f1-9c11-804d62428201 is in state STARTED 2025-04-04 01:51:47.846123 | orchestrator | 2025-04-04 01:51:44 | INFO  | Task 
454bc2f6-9d45-451f-8ce1-d6a4331b55a5 is in state STARTED
2025-04-04 01:51:47.846138 | orchestrator | 2025-04-04 01:51:44 | INFO  | Wait 1 second(s) until the next check
2025-04-04 01:51:47.846170 | orchestrator | 2025-04-04 01:51:47 | INFO  | Task fd1c07e6-f0db-4865-a8a3-41f95612fdf3 is in state STARTED
2025-04-04 01:51:47.847173 | orchestrator | 2025-04-04 01:51:47 | INFO  | Task ea6199f4-e3ab-494f-a811-1da64fa92bb3 is in state SUCCESS
2025-04-04 01:51:47.849649 | orchestrator |
2025-04-04 01:51:47.849698 | orchestrator |
2025-04-04 01:51:47.849715 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-04-04 01:51:47.849731 | orchestrator |
2025-04-04 01:51:47.849747 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-04-04 01:51:47.849767 | orchestrator | Friday 04 April 2025 01:50:11 +0000 (0:00:00.375) 0:00:00.375 **********
2025-04-04 01:51:47.849794 | orchestrator | ok: [testbed-node-3]
2025-04-04 01:51:47.849821 | orchestrator | ok: [testbed-node-4]
2025-04-04 01:51:47.849842 | orchestrator | ok: [testbed-node-5]
2025-04-04 01:51:47.849856 | orchestrator | ok: [testbed-node-0]
2025-04-04 01:51:47.849870 | orchestrator | ok: [testbed-node-1]
2025-04-04 01:51:47.849884 | orchestrator | ok: [testbed-node-2]
2025-04-04 01:51:47.849898 | orchestrator |
2025-04-04 01:51:47.849913 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-04-04 01:51:47.849927 | orchestrator | Friday 04 April 2025 01:50:12 +0000 (0:00:01.013) 0:00:01.388 **********
2025-04-04 01:51:47.849942 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-04-04 01:51:47.849956 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-04-04 01:51:47.849970 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-04-04 01:51:47.849984 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-04-04 01:51:47.849998 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-04-04 01:51:47.850012 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-04-04 01:51:47.850087 | orchestrator |
2025-04-04 01:51:47.850102 | orchestrator | PLAY [Apply role openvswitch] **************************************************
2025-04-04 01:51:47.850148 | orchestrator |
2025-04-04 01:51:47.850172 | orchestrator | TASK [openvswitch : include_tasks] *********************************************
2025-04-04 01:51:47.850187 | orchestrator | Friday 04 April 2025 01:50:14 +0000 (0:00:02.199) 0:00:03.588 **********
2025-04-04 01:51:47.850202 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-04-04 01:51:47.850218 | orchestrator |
2025-04-04 01:51:47.850232 | orchestrator | TASK [module-load : Load modules] **********************************************
2025-04-04 01:51:47.850246 | orchestrator | Friday 04 April 2025 01:50:20 +0000 (0:00:05.310) 0:00:08.898 **********
2025-04-04 01:51:47.850260 | orchestrator | changed: [testbed-node-5] => (item=openvswitch)
2025-04-04 01:51:47.850275 | orchestrator | changed: [testbed-node-3] => (item=openvswitch)
2025-04-04 01:51:47.850308 | orchestrator | changed: [testbed-node-4] => (item=openvswitch)
2025-04-04 01:51:47.850364 | orchestrator | changed: [testbed-node-0] => (item=openvswitch)
2025-04-04 01:51:47.850380 | orchestrator | changed: [testbed-node-1] => (item=openvswitch)
2025-04-04 01:51:47.850394 | orchestrator | changed: [testbed-node-2] => (item=openvswitch)
2025-04-04 01:51:47.850408 | orchestrator |
2025-04-04 01:51:47.850422 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2025-04-04 01:51:47.850436 | orchestrator | Friday 04 April 2025 01:50:23 +0000 (0:00:03.031) 0:00:11.930 **********
2025-04-04 01:51:47.850450 | orchestrator | changed: [testbed-node-4] => (item=openvswitch)
2025-04-04 01:51:47.850470 | orchestrator | changed: [testbed-node-5] => (item=openvswitch)
2025-04-04 01:51:47.850484 | orchestrator | changed: [testbed-node-3] => (item=openvswitch)
2025-04-04 01:51:47.850498 | orchestrator | changed: [testbed-node-0] => (item=openvswitch)
2025-04-04 01:51:47.850512 | orchestrator | changed: [testbed-node-1] => (item=openvswitch)
2025-04-04 01:51:47.850525 | orchestrator | changed: [testbed-node-2] => (item=openvswitch)
2025-04-04 01:51:47.850539 | orchestrator |
2025-04-04 01:51:47.850553 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2025-04-04 01:51:47.850567 | orchestrator | Friday 04 April 2025 01:50:27 +0000 (0:00:04.117) 0:00:16.048 **********
2025-04-04 01:51:47.850581 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)
2025-04-04 01:51:47.850595 | orchestrator | skipping: [testbed-node-3]
2025-04-04 01:51:47.850610 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)
2025-04-04 01:51:47.850624 | orchestrator | skipping: [testbed-node-4]
2025-04-04 01:51:47.850638 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)
2025-04-04 01:51:47.850652 | orchestrator | skipping: [testbed-node-5]
2025-04-04 01:51:47.850666 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)
2025-04-04 01:51:47.850679 | orchestrator | skipping: [testbed-node-0]
2025-04-04 01:51:47.850693 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)
2025-04-04 01:51:47.850707 | orchestrator | skipping: [testbed-node-1]
2025-04-04 01:51:47.850721 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)
2025-04-04 01:51:47.850735 | orchestrator | skipping: [testbed-node-2]
2025-04-04 01:51:47.850749 |
orchestrator | 2025-04-04 01:51:47.850762 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2025-04-04 01:51:47.850777 | orchestrator | Friday 04 April 2025 01:50:29 +0000 (0:00:02.125) 0:00:18.173 ********** 2025-04-04 01:51:47.850790 | orchestrator | skipping: [testbed-node-3] 2025-04-04 01:51:47.850804 | orchestrator | skipping: [testbed-node-4] 2025-04-04 01:51:47.850818 | orchestrator | skipping: [testbed-node-5] 2025-04-04 01:51:47.850832 | orchestrator | skipping: [testbed-node-0] 2025-04-04 01:51:47.850845 | orchestrator | skipping: [testbed-node-1] 2025-04-04 01:51:47.850859 | orchestrator | skipping: [testbed-node-2] 2025-04-04 01:51:47.850873 | orchestrator | 2025-04-04 01:51:47.850887 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2025-04-04 01:51:47.850901 | orchestrator | Friday 04 April 2025 01:50:30 +0000 (0:00:01.501) 0:00:19.674 ********** 2025-04-04 01:51:47.850933 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-04-04 01:51:47.850955 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 
'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-04-04 01:51:47.850980 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-04-04 01:51:47.850995 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 
'timeout': '30'}}}) 2025-04-04 01:51:47.851010 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-04-04 01:51:47.851032 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-04-04 01:51:47.851047 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-04-04 01:51:47.851068 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-04-04 01:51:47.851083 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-04-04 01:51:47.851098 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 
'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-04-04 01:51:47.851113 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-04-04 01:51:47.851134 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-04-04 01:51:47.851149 | orchestrator | 2025-04-04 01:51:47.851171 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2025-04-04 01:51:47.851186 | orchestrator | Friday 04 April 2025 01:50:33 +0000 (0:00:02.546) 0:00:22.221 ********** 2025-04-04 01:51:47.851200 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-04-04 01:51:47.851214 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-04-04 01:51:47.851229 | orchestrator | changed: [testbed-node-5] => (item={'key': 
'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-04-04 01:51:47.851244 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-04-04 01:51:47.851258 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 
'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-04-04 01:51:47.851280 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-04-04 01:51:47.851305 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-04-04 01:51:47.851337 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': 
['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-04-04 01:51:47.851353 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-04-04 01:51:47.851367 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-04-04 01:51:47.851388 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-04-04 01:51:47.851411 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-04-04 01:51:47.851425 | orchestrator |
2025-04-04 01:51:47.851440 | orchestrator | TASK [openvswitch : Copying over start-ovs file for openvswitch-vswitchd] ******
2025-04-04 01:51:47.851454 | orchestrator | Friday 04 April 2025 01:50:38 +0000 (0:00:04.574) 0:00:26.796 **********
2025-04-04 01:51:47.851468 | orchestrator | changed: [testbed-node-3]
2025-04-04 01:51:47.851483 | orchestrator | changed: [testbed-node-4]
2025-04-04 01:51:47.851497 | orchestrator | changed: [testbed-node-5]
2025-04-04 01:51:47.851510 | orchestrator | changed: [testbed-node-0]
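The container definitions echoed above each carry a kolla-style healthcheck dict (interval, retries, start_period, a CMD-SHELL test, timeout, all in seconds). As a minimal sketch of what such a spec corresponds to in Docker terms, the hypothetical helper below (not part of kolla-ansible) maps one of these dicts onto `docker run --health-*` flags:

```python
# Sketch: map a kolla-style healthcheck dict (as logged above) onto
# `docker run` health-check flags. `to_docker_flags` is a hypothetical
# illustration helper, not the real kolla-ansible implementation.

def to_docker_flags(healthcheck: dict) -> list[str]:
    kind, command = healthcheck["test"]  # e.g. ['CMD-SHELL', 'ovsdb-client list-dbs']
    assert kind == "CMD-SHELL", "only CMD-SHELL tests handled in this sketch"
    return [
        f"--health-cmd={command}",
        f"--health-interval={healthcheck['interval']}s",
        f"--health-retries={healthcheck['retries']}",
        f"--health-start-period={healthcheck['start_period']}s",
        f"--health-timeout={healthcheck['timeout']}s",
    ]

flags = to_docker_flags({
    "interval": "30", "retries": "3", "start_period": "5",
    "test": ["CMD-SHELL", "ovsdb-client list-dbs"], "timeout": "30",
})
print(flags[0])  # --health-cmd=ovsdb-client list-dbs
```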
2025-04-04 01:51:47.851524 | orchestrator | changed: [testbed-node-1]
2025-04-04 01:51:47.851538 | orchestrator | changed: [testbed-node-2]
2025-04-04 01:51:47.851552 | orchestrator |
2025-04-04 01:51:47.851566 | orchestrator | TASK [openvswitch : Copying over start-ovsdb-server files for openvswitch-db-server] ***
2025-04-04 01:51:47.851580 | orchestrator | Friday 04 April 2025 01:50:41 +0000 (0:00:03.072) 0:00:29.868 **********
2025-04-04 01:51:47.851594 | orchestrator | changed: [testbed-node-3]
2025-04-04 01:51:47.851608 | orchestrator | changed: [testbed-node-4]
2025-04-04 01:51:47.851621 | orchestrator | changed: [testbed-node-5]
2025-04-04 01:51:47.851635 | orchestrator | changed: [testbed-node-0]
2025-04-04 01:51:47.851649 | orchestrator | changed: [testbed-node-1]
2025-04-04 01:51:47.851663 | orchestrator | changed: [testbed-node-2]
2025-04-04 01:51:47.851677 | orchestrator |
2025-04-04 01:51:47.851691 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] ****************************
2025-04-04 01:51:47.851705 | orchestrator | Friday 04 April 2025 01:50:45 +0000 (0:00:04.875) 0:00:34.743 **********
2025-04-04 01:51:47.851719 | orchestrator | skipping: [testbed-node-3]
2025-04-04 01:51:47.851732 | orchestrator | skipping: [testbed-node-4]
2025-04-04 01:51:47.851746 | orchestrator | skipping: [testbed-node-5]
2025-04-04 01:51:47.851760 | orchestrator | skipping: [testbed-node-0]
2025-04-04 01:51:47.851774 | orchestrator | skipping: [testbed-node-1]
2025-04-04 01:51:47.851788 | orchestrator | skipping: [testbed-node-2]
2025-04-04 01:51:47.851801 | orchestrator |
2025-04-04 01:51:47.851815 | orchestrator | TASK [openvswitch : Check openvswitch containers] ******************************
2025-04-04 01:51:47.851829 | orchestrator | Friday 04 April 2025 01:50:47 +0000 (0:00:01.610) 0:00:36.354 **********
2025-04-04 01:51:47.851844 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-04-04 01:51:47.851859 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-04-04 01:51:47.851885 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-04-04 01:51:47.851901 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-04-04 01:51:47.851915 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-04-04 01:51:47.851930 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-04-04 01:51:47.851945 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-04-04 01:51:47.851965 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-04-04 01:51:47.851988 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-04-04 01:51:47.852003 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-04-04 01:51:47.852026 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared',
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-04-04 01:51:47.852051 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-04-04 01:51:47.852076 | orchestrator |
2025-04-04 01:51:47.852097 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-04-04 01:51:47.852123 | orchestrator | Friday 04 April 2025 01:50:51 +0000 (0:00:04.297) 0:00:40.652 **********
2025-04-04 01:51:47.852137 | orchestrator |
2025-04-04 01:51:47.852152 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-04-04 01:51:47.852166 | orchestrator | Friday 04 April 2025 01:50:52 +0000 (0:00:00.281) 0:00:40.933 **********
2025-04-04 01:51:47.852179 | orchestrator |
2025-04-04 01:51:47.852193 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-04-04 01:51:47.852207 | orchestrator | Friday 04 April 2025 01:50:52 +0000 (0:00:00.787) 0:00:41.721 **********
2025-04-04 01:51:47.852221 | orchestrator |
2025-04-04 01:51:47.852235 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-04-04 01:51:47.852249 | orchestrator | Friday 04 April 2025 01:50:53 +0000 (0:00:00.278) 0:00:41.999 **********
2025-04-04 01:51:47.852263 | orchestrator |
2025-04-04 01:51:47.852277 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-04-04 01:51:47.852291 | orchestrator | Friday 04 April 2025 01:50:53 +0000 (0:00:00.745) 0:00:42.745 **********
2025-04-04 01:51:47.852304 | orchestrator |
2025-04-04 01:51:47.852373 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-04-04 01:51:47.852389 | orchestrator | Friday 04 April 2025 01:50:54 +0000 (0:00:00.236) 0:00:42.981 **********
2025-04-04 01:51:47.852404 | orchestrator |
2025-04-04 01:51:47.852418 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ********
2025-04-04 01:51:47.852430 | orchestrator | Friday 04 April 2025 01:50:54 +0000 (0:00:00.407) 0:00:43.388 **********
2025-04-04 01:51:47.852443 | orchestrator | changed: [testbed-node-3]
2025-04-04 01:51:47.852456 | orchestrator | changed: [testbed-node-0]
2025-04-04 01:51:47.852468 | orchestrator | changed: [testbed-node-2]
2025-04-04 01:51:47.852480 | orchestrator | changed: [testbed-node-4]
2025-04-04 01:51:47.852493 | orchestrator | changed: [testbed-node-5]
2025-04-04 01:51:47.852505 | orchestrator | changed: [testbed-node-1]
2025-04-04 01:51:47.852518 | orchestrator |
2025-04-04 01:51:47.852530 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] ***
2025-04-04 01:51:47.852543 | orchestrator | Friday 04 April 2025 01:51:04 +0000 (0:00:09.962) 0:00:53.351 **********
2025-04-04 01:51:47.852562 | orchestrator | ok: [testbed-node-3]
2025-04-04 01:51:47.852576 | orchestrator | ok: [testbed-node-5]
2025-04-04 01:51:47.852588 | orchestrator | ok: [testbed-node-0]
2025-04-04 01:51:47.852600 | orchestrator | ok: [testbed-node-1]
2025-04-04 01:51:47.852613 | orchestrator | ok: [testbed-node-2]
2025-04-04 01:51:47.852625 |
orchestrator | ok: [testbed-node-4]
2025-04-04 01:51:47.852637 | orchestrator |
2025-04-04 01:51:47.852649 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] *********
2025-04-04 01:51:47.852662 | orchestrator | Friday 04 April 2025 01:51:07 +0000 (0:00:02.548) 0:00:55.899 **********
2025-04-04 01:51:47.852674 | orchestrator | changed: [testbed-node-0]
2025-04-04 01:51:47.852687 | orchestrator | changed: [testbed-node-1]
2025-04-04 01:51:47.852700 | orchestrator | changed: [testbed-node-2]
2025-04-04 01:51:47.852720 | orchestrator | changed: [testbed-node-3]
2025-04-04 01:51:47.852734 | orchestrator | changed: [testbed-node-4]
2025-04-04 01:51:47.852746 | orchestrator | changed: [testbed-node-5]
2025-04-04 01:51:47.852758 | orchestrator |
2025-04-04 01:51:47.852771 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ********************
2025-04-04 01:51:47.852783 | orchestrator | Friday 04 April 2025 01:51:19 +0000 (0:00:12.058) 0:01:07.958 **********
2025-04-04 01:51:47.852796 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'})
2025-04-04 01:51:47.852808 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'})
2025-04-04 01:51:47.852821 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'})
2025-04-04 01:51:47.852833 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'})
2025-04-04 01:51:47.852852 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'})
2025-04-04 01:51:47.852865 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'})
2025-04-04 01:51:47.852877 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'})
2025-04-04 01:51:47.852890 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'})
2025-04-04 01:51:47.852902 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'})
2025-04-04 01:51:47.852915 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'})
2025-04-04 01:51:47.852935 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-04-04 01:51:47.852948 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'})
2025-04-04 01:51:47.852960 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'})
2025-04-04 01:51:47.852972 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-04-04 01:51:47.852985 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-04-04 01:51:47.852997 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-04-04 01:51:47.853009 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-04-04 01:51:47.853021 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-04-04 01:51:47.853033 | orchestrator |
2025-04-04 01:51:47.853046 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] *********************
2025-04-04 01:51:47.853058 | orchestrator |
Friday 04 April 2025 01:51:28 +0000 (0:00:08.908) 0:01:16.866 **********
2025-04-04 01:51:47.853070 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)
2025-04-04 01:51:47.853083 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)
2025-04-04 01:51:47.853095 | orchestrator | skipping: [testbed-node-3]
2025-04-04 01:51:47.853108 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)
2025-04-04 01:51:47.853120 | orchestrator | skipping: [testbed-node-4]
2025-04-04 01:51:47.853132 | orchestrator | skipping: [testbed-node-5]
2025-04-04 01:51:47.853145 | orchestrator | changed: [testbed-node-0] => (item=br-ex)
2025-04-04 01:51:47.853157 | orchestrator | changed: [testbed-node-1] => (item=br-ex)
2025-04-04 01:51:47.853169 | orchestrator | changed: [testbed-node-2] => (item=br-ex)
2025-04-04 01:51:47.853182 | orchestrator |
2025-04-04 01:51:47.853194 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] *********************
2025-04-04 01:51:47.853207 | orchestrator | Friday 04 April 2025 01:51:31 +0000 (0:00:03.562) 0:01:20.428 **********
2025-04-04 01:51:47.853219 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])
2025-04-04 01:51:47.853231 | orchestrator | skipping: [testbed-node-3]
2025-04-04 01:51:47.853244 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])
2025-04-04 01:51:47.853256 | orchestrator | skipping: [testbed-node-4]
2025-04-04 01:51:47.853268 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])
2025-04-04 01:51:47.853281 | orchestrator | skipping: [testbed-node-5]
2025-04-04 01:51:47.853293 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0'])
2025-04-04 01:51:47.853311 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0'])
2025-04-04 01:51:47.854172 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0'])
2025-04-04 01:51:47.854315 | orchestrator |
2025-04-04 01:51:47.854371 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] *********
2025-04-04 01:51:47.854387 | orchestrator | Friday 04 April 2025 01:51:36 +0000 (0:00:04.473) 0:01:24.902 **********
2025-04-04 01:51:47.854401 | orchestrator | changed: [testbed-node-3]
2025-04-04 01:51:47.854416 | orchestrator | changed: [testbed-node-4]
2025-04-04 01:51:47.854430 | orchestrator | changed: [testbed-node-5]
2025-04-04 01:51:47.854444 | orchestrator | changed: [testbed-node-0]
2025-04-04 01:51:47.854458 | orchestrator | changed: [testbed-node-1]
2025-04-04 01:51:47.854472 | orchestrator | changed: [testbed-node-2]
2025-04-04 01:51:47.854486 | orchestrator |
2025-04-04 01:51:47.854500 | orchestrator | PLAY RECAP *********************************************************************
2025-04-04 01:51:47.854515 | orchestrator | testbed-node-0 : ok=17  changed=13  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-04-04 01:51:47.854532 | orchestrator | testbed-node-1 : ok=17  changed=13  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-04-04 01:51:47.854546 | orchestrator | testbed-node-2 : ok=17  changed=13  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-04-04 01:51:47.854560 | orchestrator | testbed-node-3 : ok=15  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-04-04 01:51:47.854574 | orchestrator | testbed-node-4 : ok=15  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-04-04 01:51:47.854602 | orchestrator | testbed-node-5 : ok=15  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-04-04 01:51:47.854616 | orchestrator |
2025-04-04 01:51:47.854631 | orchestrator |
2025-04-04 01:51:47.854645 | orchestrator | TASKS RECAP ********************************************************************
2025-04-04 01:51:47.854659 | orchestrator | Friday 04 April 2025 01:51:45 +0000 (0:00:09.577) 0:01:34.480 **********
2025-04-04 01:51:47.854673 | orchestrator | ===============================================================================
2025-04-04 01:51:47.854688 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 21.64s
2025-04-04 01:51:47.854702 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------- 9.96s
2025-04-04 01:51:47.854715 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 8.91s
2025-04-04 01:51:47.854729 | orchestrator | openvswitch : include_tasks --------------------------------------------- 5.31s
2025-04-04 01:51:47.854743 | orchestrator | openvswitch : Copying over start-ovsdb-server files for openvswitch-db-server --- 4.88s
2025-04-04 01:51:47.854758 | orchestrator | openvswitch : Copying over config.json files for services --------------- 4.57s
2025-04-04 01:51:47.854772 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 4.47s
2025-04-04 01:51:47.854786 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 4.30s
2025-04-04 01:51:47.854799 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 4.12s
2025-04-04 01:51:47.854813 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 3.56s
2025-04-04 01:51:47.854827 | orchestrator | openvswitch : Copying over start-ovs file for openvswitch-vswitchd ------ 3.07s
2025-04-04 01:51:47.854841 | orchestrator | module-load : Load modules ---------------------------------------------- 3.03s
2025-04-04 01:51:47.854860 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 2.74s
2025-04-04 01:51:47.854875 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 2.55s
2025-04-04 01:51:47.854888 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 2.55s
2025-04-04 01:51:47.854902 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.20s
2025-04-04 01:51:47.854916 | orchestrator | module-load : Drop module persistence ----------------------------------- 2.13s
2025-04-04 01:51:47.854940 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 1.61s
2025-04-04 01:51:47.854954 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 1.50s
2025-04-04 01:51:47.854968 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.01s
2025-04-04 01:51:47.854982 | orchestrator | 2025-04-04 01:51:47 | INFO  | Task 96516d64-daf5-498e-9e4b-0a75c66f9fef is in state STARTED
2025-04-04 01:51:47.855014 | orchestrator | 2025-04-04 01:51:47 | INFO  | Task 8a72e806-06b2-4248-8108-4d22850067f9 is in state STARTED
2025-04-04 01:51:47.856191 | orchestrator | 2025-04-04 01:51:47 | INFO  | Task 4700ab06-d1be-44f1-9c11-804d62428201 is in state STARTED
2025-04-04 01:51:47.856849 | orchestrator | 2025-04-04 01:51:47 | INFO  | Task 454bc2f6-9d45-451f-8ce1-d6a4331b55a5 is in state STARTED
2025-04-04 01:51:50.902227 | orchestrator | 2025-04-04 01:51:47 | INFO  | Wait 1 second(s) until the next check
2025-04-04 01:51:50.902407 | orchestrator | 2025-04-04 01:51:50 | INFO  | Task fd1c07e6-f0db-4865-a8a3-41f95612fdf3 is in state STARTED
2025-04-04 01:51:50.902673 | orchestrator | 2025-04-04 01:51:50 | INFO  | Task 96516d64-daf5-498e-9e4b-0a75c66f9fef is in state STARTED
2025-04-04 01:51:50.903709 | orchestrator | 2025-04-04 01:51:50 | INFO  | Task 8a72e806-06b2-4248-8108-4d22850067f9 is in state STARTED
2025-04-04 01:51:50.904797 | orchestrator | 2025-04-04 01:51:50 | INFO  | Task 4700ab06-d1be-44f1-9c11-804d62428201 is in state STARTED
2025-04-04 01:51:50.905816 | orchestrator | 2025-04-04 01:51:50 | INFO  | Task 454bc2f6-9d45-451f-8ce1-d6a4331b55a5 is in state STARTED
2025-04-04 01:51:53.961822 | orchestrator | 2025-04-04 01:51:50 | INFO  | Wait 1 second(s)
until the next check
2025-04-04 01:51:53.961966 | orchestrator | 2025-04-04 01:51:53 | INFO  | Task fd1c07e6-f0db-4865-a8a3-41f95612fdf3 is in state STARTED
2025-04-04 01:51:53.968113 | orchestrator | 2025-04-04 01:51:53 | INFO  | Task 96516d64-daf5-498e-9e4b-0a75c66f9fef is in state STARTED
2025-04-04 01:51:53.975766 | orchestrator | 2025-04-04 01:51:53 | INFO  | Task 8a72e806-06b2-4248-8108-4d22850067f9 is in state STARTED
2025-04-04 01:51:53.976456 | orchestrator | 2025-04-04 01:51:53 | INFO  | Task 4700ab06-d1be-44f1-9c11-804d62428201 is in state STARTED
2025-04-04 01:51:53.983043 | orchestrator | 2025-04-04 01:51:53 | INFO  | Task 454bc2f6-9d45-451f-8ce1-d6a4331b55a5 is in state STARTED
2025-04-04 01:51:57.037256 | orchestrator | 2025-04-04 01:51:53 | INFO  | Wait 1 second(s) until the next check
2025-04-04 01:51:57.037450 | orchestrator | 2025-04-04 01:51:57 | INFO  | Task fd1c07e6-f0db-4865-a8a3-41f95612fdf3 is in state STARTED
2025-04-04 01:51:57.037644 | orchestrator | 2025-04-04 01:51:57 | INFO  | Task 96516d64-daf5-498e-9e4b-0a75c66f9fef is in state STARTED
2025-04-04 01:51:57.038804 | orchestrator | 2025-04-04 01:51:57 | INFO  | Task 8a72e806-06b2-4248-8108-4d22850067f9 is in state STARTED
2025-04-04 01:51:57.042738 | orchestrator | 2025-04-04 01:51:57 | INFO  | Task 4700ab06-d1be-44f1-9c11-804d62428201 is in state STARTED
2025-04-04 01:51:57.043683 | orchestrator | 2025-04-04 01:51:57 | INFO  | Task 454bc2f6-9d45-451f-8ce1-d6a4331b55a5 is in state STARTED
2025-04-04 01:52:00.121818 | orchestrator | 2025-04-04 01:51:57 | INFO  | Wait 1 second(s) until the next check
2025-04-04 01:52:00.121970 | orchestrator | 2025-04-04 01:52:00 | INFO  | Task fd1c07e6-f0db-4865-a8a3-41f95612fdf3 is in state STARTED
2025-04-04 01:52:00.122096 | orchestrator | 2025-04-04 01:52:00 | INFO  | Task 96516d64-daf5-498e-9e4b-0a75c66f9fef is in state STARTED
2025-04-04 01:52:00.122117 | orchestrator | 2025-04-04 01:52:00 | INFO  | Task 8a72e806-06b2-4248-8108-4d22850067f9 is in state STARTED
2025-04-04 01:52:00.122165 | orchestrator | 2025-04-04 01:52:00 | INFO  | Task 4700ab06-d1be-44f1-9c11-804d62428201 is in state STARTED
2025-04-04 01:52:00.122788 | orchestrator | 2025-04-04 01:52:00 | INFO  | Task 454bc2f6-9d45-451f-8ce1-d6a4331b55a5 is in state STARTED
2025-04-04 01:52:03.172903 | orchestrator | 2025-04-04 01:52:00 | INFO  | Wait 1 second(s) until the next check
2025-04-04 01:52:03.173055 | orchestrator | 2025-04-04 01:52:03 | INFO  | Task fd1c07e6-f0db-4865-a8a3-41f95612fdf3 is in state STARTED
2025-04-04 01:52:03.174672 | orchestrator | 2025-04-04 01:52:03 | INFO  | Task 96516d64-daf5-498e-9e4b-0a75c66f9fef is in state STARTED
2025-04-04 01:52:03.177137 | orchestrator | 2025-04-04 01:52:03 | INFO  | Task 8a72e806-06b2-4248-8108-4d22850067f9 is in state STARTED
2025-04-04 01:52:03.178867 | orchestrator | 2025-04-04 01:52:03 | INFO  | Task 4700ab06-d1be-44f1-9c11-804d62428201 is in state STARTED
2025-04-04 01:52:03.180770 | orchestrator | 2025-04-04 01:52:03 | INFO  | Task 454bc2f6-9d45-451f-8ce1-d6a4331b55a5 is in state STARTED
2025-04-04 01:52:03.180973 | orchestrator | 2025-04-04 01:52:03 | INFO  | Wait 1 second(s) until the next check
2025-04-04 01:52:06.247955 | orchestrator | 2025-04-04 01:52:06 | INFO  | Task fd1c07e6-f0db-4865-a8a3-41f95612fdf3 is in state STARTED
2025-04-04 01:52:06.249537 | orchestrator | 2025-04-04 01:52:06 | INFO  | Task 96516d64-daf5-498e-9e4b-0a75c66f9fef is in state STARTED
2025-04-04 01:52:06.251520 | orchestrator | 2025-04-04 01:52:06 | INFO  | Task 8a72e806-06b2-4248-8108-4d22850067f9 is in state STARTED
2025-04-04 01:52:06.256342 | orchestrator | 2025-04-04 01:52:06 | INFO  | Task 4700ab06-d1be-44f1-9c11-804d62428201 is in state STARTED
2025-04-04 01:52:09.318292 | orchestrator | 2025-04-04 01:52:06 | INFO  | Task 454bc2f6-9d45-451f-8ce1-d6a4331b55a5 is in state STARTED
2025-04-04 01:52:09.318411 | orchestrator | 2025-04-04 01:52:06 | INFO  | Wait 1 second(s) until the next check
2025-04-04 01:52:09.318442 | orchestrator | 2025-04-04 01:52:09 | INFO  | Task fd1c07e6-f0db-4865-a8a3-41f95612fdf3 is in state STARTED
2025-04-04 01:52:09.320116 | orchestrator | 2025-04-04 01:52:09 | INFO  | Task 96516d64-daf5-498e-9e4b-0a75c66f9fef is in state STARTED
2025-04-04 01:52:09.320585 | orchestrator | 2025-04-04 01:52:09 | INFO  | Task 8a72e806-06b2-4248-8108-4d22850067f9 is in state STARTED
2025-04-04 01:52:09.320986 | orchestrator | 2025-04-04 01:52:09 | INFO  | Task 4700ab06-d1be-44f1-9c11-804d62428201 is in state STARTED
2025-04-04 01:52:09.325497 | orchestrator | 2025-04-04 01:52:09 | INFO  | Task 454bc2f6-9d45-451f-8ce1-d6a4331b55a5 is in state STARTED
2025-04-04 01:52:12.381709 | orchestrator | 2025-04-04 01:52:09 | INFO  | Wait 1 second(s) until the next check
2025-04-04 01:52:12.381853 | orchestrator | 2025-04-04 01:52:12 | INFO  | Task fd1c07e6-f0db-4865-a8a3-41f95612fdf3 is in state STARTED
2025-04-04 01:52:12.382510 | orchestrator | 2025-04-04 01:52:12 | INFO  | Task 96516d64-daf5-498e-9e4b-0a75c66f9fef is in state STARTED
2025-04-04 01:52:12.384174 | orchestrator | 2025-04-04 01:52:12 | INFO  | Task 8a72e806-06b2-4248-8108-4d22850067f9 is in state STARTED
2025-04-04 01:52:12.384849 | orchestrator | 2025-04-04 01:52:12 | INFO  | Task 4700ab06-d1be-44f1-9c11-804d62428201 is in state STARTED
2025-04-04 01:52:12.387415 | orchestrator | 2025-04-04 01:52:12 | INFO  | Task 454bc2f6-9d45-451f-8ce1-d6a4331b55a5 is in state STARTED
2025-04-04 01:52:15.441986 | orchestrator | 2025-04-04 01:52:12 | INFO  | Wait 1 second(s) until the next check
2025-04-04 01:52:15.442198 | orchestrator | 2025-04-04 01:52:15 | INFO  | Task fd1c07e6-f0db-4865-a8a3-41f95612fdf3 is in state STARTED
2025-04-04 01:52:15.447636 | orchestrator | 2025-04-04 01:52:15 | INFO  | Task 96516d64-daf5-498e-9e4b-0a75c66f9fef is in state STARTED
2025-04-04 01:52:15.451095 | orchestrator | 2025-04-04 01:52:15 | INFO  | Task 8a72e806-06b2-4248-8108-4d22850067f9 is in state STARTED
2025-04-04 01:52:15.451124 | orchestrator | 2025-04-04 01:52:15 | INFO  | Task 4700ab06-d1be-44f1-9c11-804d62428201 is in state STARTED
2025-04-04 01:52:15.452929 | orchestrator | 2025-04-04 01:52:15 | INFO  | Task 454bc2f6-9d45-451f-8ce1-d6a4331b55a5 is in state STARTED
2025-04-04 01:52:18.527354 | orchestrator | 2025-04-04 01:52:15 | INFO  | Wait 1 second(s) until the next check
2025-04-04 01:52:18.527523 | orchestrator | 2025-04-04 01:52:18 | INFO  | Task fd1c07e6-f0db-4865-a8a3-41f95612fdf3 is in state STARTED
2025-04-04 01:52:18.528737 | orchestrator | 2025-04-04 01:52:18 | INFO  | Task 96516d64-daf5-498e-9e4b-0a75c66f9fef is in state STARTED
2025-04-04 01:52:18.530137 | orchestrator | 2025-04-04 01:52:18 | INFO  | Task 8a72e806-06b2-4248-8108-4d22850067f9 is in state STARTED
2025-04-04 01:52:18.531496 | orchestrator | 2025-04-04 01:52:18 | INFO  | Task 4700ab06-d1be-44f1-9c11-804d62428201 is in state STARTED
2025-04-04 01:52:18.533116 | orchestrator | 2025-04-04 01:52:18 | INFO  | Task 454bc2f6-9d45-451f-8ce1-d6a4331b55a5 is in state STARTED
2025-04-04 01:52:18.533227 | orchestrator | 2025-04-04 01:52:18 | INFO  | Wait 1 second(s) until the next check
2025-04-04 01:52:21.587815 | orchestrator | 2025-04-04 01:52:21 | INFO  | Task fd1c07e6-f0db-4865-a8a3-41f95612fdf3 is in state STARTED
2025-04-04 01:52:21.591359 | orchestrator | 2025-04-04 01:52:21 | INFO  | Task 96516d64-daf5-498e-9e4b-0a75c66f9fef is in state STARTED
2025-04-04 01:52:21.593595 | orchestrator | 2025-04-04 01:52:21 | INFO  | Task 8a72e806-06b2-4248-8108-4d22850067f9 is in state STARTED
2025-04-04 01:52:21.596606 | orchestrator | 2025-04-04 01:52:21 | INFO  | Task 4700ab06-d1be-44f1-9c11-804d62428201 is in state STARTED
2025-04-04 01:52:21.598554 | orchestrator | 2025-04-04 01:52:21 | INFO  | Task 454bc2f6-9d45-451f-8ce1-d6a4331b55a5 is in state STARTED
2025-04-04 01:52:21.598615 | orchestrator | 2025-04-04 01:52:21 | INFO  | Wait 1 second(s) until the next check
2025-04-04 01:52:24.656984 | orchestrator
| 2025-04-04 01:52:24 | INFO  | Task fd1c07e6-f0db-4865-a8a3-41f95612fdf3 is in state STARTED 2025-04-04 01:52:24.657217 | orchestrator | 2025-04-04 01:52:24 | INFO  | Task 96516d64-daf5-498e-9e4b-0a75c66f9fef is in state STARTED 2025-04-04 01:52:24.657251 | orchestrator | 2025-04-04 01:52:24 | INFO  | Task 8a72e806-06b2-4248-8108-4d22850067f9 is in state STARTED 2025-04-04 01:52:24.657952 | orchestrator | 2025-04-04 01:52:24 | INFO  | Task 4700ab06-d1be-44f1-9c11-804d62428201 is in state STARTED 2025-04-04 01:52:24.658727 | orchestrator | 2025-04-04 01:52:24 | INFO  | Task 454bc2f6-9d45-451f-8ce1-d6a4331b55a5 is in state STARTED 2025-04-04 01:52:27.709545 | orchestrator | 2025-04-04 01:52:24 | INFO  | Wait 1 second(s) until the next check 2025-04-04 01:52:27.709722 | orchestrator | 2025-04-04 01:52:27 | INFO  | Task fd1c07e6-f0db-4865-a8a3-41f95612fdf3 is in state STARTED 2025-04-04 01:52:27.710845 | orchestrator | 2025-04-04 01:52:27 | INFO  | Task 96516d64-daf5-498e-9e4b-0a75c66f9fef is in state STARTED 2025-04-04 01:52:27.714506 | orchestrator | 2025-04-04 01:52:27 | INFO  | Task 8a72e806-06b2-4248-8108-4d22850067f9 is in state STARTED 2025-04-04 01:52:27.717019 | orchestrator | 2025-04-04 01:52:27 | INFO  | Task 4700ab06-d1be-44f1-9c11-804d62428201 is in state STARTED 2025-04-04 01:52:27.718824 | orchestrator | 2025-04-04 01:52:27 | INFO  | Task 454bc2f6-9d45-451f-8ce1-d6a4331b55a5 is in state STARTED 2025-04-04 01:52:27.719138 | orchestrator | 2025-04-04 01:52:27 | INFO  | Wait 1 second(s) until the next check 2025-04-04 01:52:30.771675 | orchestrator | 2025-04-04 01:52:30 | INFO  | Task fd1c07e6-f0db-4865-a8a3-41f95612fdf3 is in state STARTED 2025-04-04 01:52:30.772385 | orchestrator | 2025-04-04 01:52:30 | INFO  | Task 96516d64-daf5-498e-9e4b-0a75c66f9fef is in state STARTED 2025-04-04 01:52:30.777863 | orchestrator | 2025-04-04 01:52:30 | INFO  | Task 8a72e806-06b2-4248-8108-4d22850067f9 is in state STARTED 2025-04-04 01:52:30.784973 | orchestrator | 
2025-04-04 01:52:30 | INFO  | Task 4700ab06-d1be-44f1-9c11-804d62428201 is in state STARTED 2025-04-04 01:52:30.787030 | orchestrator | 2025-04-04 01:52:30 | INFO  | Task 454bc2f6-9d45-451f-8ce1-d6a4331b55a5 is in state STARTED 2025-04-04 01:52:33.835282 | orchestrator | 2025-04-04 01:52:30 | INFO  | Wait 1 second(s) until the next check 2025-04-04 01:52:33.835453 | orchestrator | 2025-04-04 01:52:33 | INFO  | Task fd1c07e6-f0db-4865-a8a3-41f95612fdf3 is in state STARTED 2025-04-04 01:52:33.835530 | orchestrator | 2025-04-04 01:52:33 | INFO  | Task 96516d64-daf5-498e-9e4b-0a75c66f9fef is in state STARTED 2025-04-04 01:52:33.841330 | orchestrator | 2025-04-04 01:52:33 | INFO  | Task 8a72e806-06b2-4248-8108-4d22850067f9 is in state STARTED 2025-04-04 01:52:33.842370 | orchestrator | 2025-04-04 01:52:33 | INFO  | Task 4700ab06-d1be-44f1-9c11-804d62428201 is in state STARTED 2025-04-04 01:52:33.842941 | orchestrator | 2025-04-04 01:52:33 | INFO  | Task 454bc2f6-9d45-451f-8ce1-d6a4331b55a5 is in state STARTED 2025-04-04 01:52:36.949076 | orchestrator | 2025-04-04 01:52:33 | INFO  | Wait 1 second(s) until the next check 2025-04-04 01:52:36.949229 | orchestrator | 2025-04-04 01:52:36 | INFO  | Task fd1c07e6-f0db-4865-a8a3-41f95612fdf3 is in state STARTED 2025-04-04 01:52:36.951807 | orchestrator | 2025-04-04 01:52:36 | INFO  | Task 96516d64-daf5-498e-9e4b-0a75c66f9fef is in state STARTED 2025-04-04 01:52:36.954919 | orchestrator | 2025-04-04 01:52:36 | INFO  | Task 8a72e806-06b2-4248-8108-4d22850067f9 is in state STARTED 2025-04-04 01:52:36.959086 | orchestrator | 2025-04-04 01:52:36 | INFO  | Task 4700ab06-d1be-44f1-9c11-804d62428201 is in state STARTED 2025-04-04 01:52:36.960658 | orchestrator | 2025-04-04 01:52:36 | INFO  | Task 454bc2f6-9d45-451f-8ce1-d6a4331b55a5 is in state STARTED 2025-04-04 01:52:36.960949 | orchestrator | 2025-04-04 01:52:36 | INFO  | Wait 1 second(s) until the next check 2025-04-04 01:52:40.070296 | orchestrator | 2025-04-04 01:52:40 | INFO  | 
Task fd1c07e6-f0db-4865-a8a3-41f95612fdf3 is in state STARTED 2025-04-04 01:52:40.075675 | orchestrator | 2025-04-04 01:52:40 | INFO  | Task 96516d64-daf5-498e-9e4b-0a75c66f9fef is in state STARTED 2025-04-04 01:52:40.078117 | orchestrator | 2025-04-04 01:52:40 | INFO  | Task 8a72e806-06b2-4248-8108-4d22850067f9 is in state STARTED 2025-04-04 01:52:40.083554 | orchestrator | 2025-04-04 01:52:40 | INFO  | Task 4700ab06-d1be-44f1-9c11-804d62428201 is in state STARTED 2025-04-04 01:52:43.126955 | orchestrator | 2025-04-04 01:52:40 | INFO  | Task 454bc2f6-9d45-451f-8ce1-d6a4331b55a5 is in state STARTED 2025-04-04 01:52:43.127087 | orchestrator | 2025-04-04 01:52:40 | INFO  | Wait 1 second(s) until the next check 2025-04-04 01:52:43.127125 | orchestrator | 2025-04-04 01:52:43 | INFO  | Task fd1c07e6-f0db-4865-a8a3-41f95612fdf3 is in state STARTED 2025-04-04 01:52:43.129035 | orchestrator | 2025-04-04 01:52:43 | INFO  | Task 96516d64-daf5-498e-9e4b-0a75c66f9fef is in state STARTED 2025-04-04 01:52:43.130856 | orchestrator | 2025-04-04 01:52:43 | INFO  | Task 8a72e806-06b2-4248-8108-4d22850067f9 is in state STARTED 2025-04-04 01:52:43.132013 | orchestrator | 2025-04-04 01:52:43 | INFO  | Task 4700ab06-d1be-44f1-9c11-804d62428201 is in state STARTED 2025-04-04 01:52:43.132969 | orchestrator | 2025-04-04 01:52:43 | INFO  | Task 454bc2f6-9d45-451f-8ce1-d6a4331b55a5 is in state STARTED 2025-04-04 01:52:46.192751 | orchestrator | 2025-04-04 01:52:43 | INFO  | Wait 1 second(s) until the next check 2025-04-04 01:52:46.192921 | orchestrator | 2025-04-04 01:52:46 | INFO  | Task fd1c07e6-f0db-4865-a8a3-41f95612fdf3 is in state STARTED 2025-04-04 01:52:46.195764 | orchestrator | 2025-04-04 01:52:46 | INFO  | Task 96516d64-daf5-498e-9e4b-0a75c66f9fef is in state STARTED 2025-04-04 01:52:46.195788 | orchestrator | 2025-04-04 01:52:46 | INFO  | Task 8a72e806-06b2-4248-8108-4d22850067f9 is in state STARTED 2025-04-04 01:52:46.196805 | orchestrator | 2025-04-04 01:52:46 | INFO  | Task 
4700ab06-d1be-44f1-9c11-804d62428201 is in state STARTED 2025-04-04 01:52:46.197964 | orchestrator | 2025-04-04 01:52:46 | INFO  | Task 454bc2f6-9d45-451f-8ce1-d6a4331b55a5 is in state STARTED 2025-04-04 01:52:49.250000 | orchestrator | 2025-04-04 01:52:46 | INFO  | Wait 1 second(s) until the next check 2025-04-04 01:52:49.250222 | orchestrator | 2025-04-04 01:52:49 | INFO  | Task fd1c07e6-f0db-4865-a8a3-41f95612fdf3 is in state STARTED 2025-04-04 01:52:49.252753 | orchestrator | 2025-04-04 01:52:49 | INFO  | Task 96516d64-daf5-498e-9e4b-0a75c66f9fef is in state STARTED 2025-04-04 01:52:49.255420 | orchestrator | 2025-04-04 01:52:49 | INFO  | Task 8a72e806-06b2-4248-8108-4d22850067f9 is in state STARTED 2025-04-04 01:52:49.259394 | orchestrator | 2025-04-04 01:52:49 | INFO  | Task 4700ab06-d1be-44f1-9c11-804d62428201 is in state STARTED 2025-04-04 01:52:49.259445 | orchestrator | 2025-04-04 01:52:49 | INFO  | Task 454bc2f6-9d45-451f-8ce1-d6a4331b55a5 is in state STARTED 2025-04-04 01:52:52.314202 | orchestrator | 2025-04-04 01:52:49 | INFO  | Wait 1 second(s) until the next check 2025-04-04 01:52:52.314417 | orchestrator | 2025-04-04 01:52:52 | INFO  | Task fd1c07e6-f0db-4865-a8a3-41f95612fdf3 is in state STARTED 2025-04-04 01:52:52.317322 | orchestrator | 2025-04-04 01:52:52 | INFO  | Task 96516d64-daf5-498e-9e4b-0a75c66f9fef is in state STARTED 2025-04-04 01:52:52.319808 | orchestrator | 2025-04-04 01:52:52 | INFO  | Task 8a72e806-06b2-4248-8108-4d22850067f9 is in state STARTED 2025-04-04 01:52:52.321742 | orchestrator | 2025-04-04 01:52:52 | INFO  | Task 4700ab06-d1be-44f1-9c11-804d62428201 is in state STARTED 2025-04-04 01:52:52.324092 | orchestrator | 2025-04-04 01:52:52 | INFO  | Task 454bc2f6-9d45-451f-8ce1-d6a4331b55a5 is in state STARTED 2025-04-04 01:52:52.324390 | orchestrator | 2025-04-04 01:52:52 | INFO  | Wait 1 second(s) until the next check 2025-04-04 01:52:55.389488 | orchestrator | 2025-04-04 01:52:55 | INFO  | Task 
fd1c07e6-f0db-4865-a8a3-41f95612fdf3 is in state STARTED 2025-04-04 01:52:55.390482 | orchestrator | 2025-04-04 01:52:55 | INFO  | Task 96516d64-daf5-498e-9e4b-0a75c66f9fef is in state STARTED 2025-04-04 01:52:55.394176 | orchestrator | 2025-04-04 01:52:55 | INFO  | Task 8a72e806-06b2-4248-8108-4d22850067f9 is in state STARTED 2025-04-04 01:52:55.395596 | orchestrator | 2025-04-04 01:52:55 | INFO  | Task 4700ab06-d1be-44f1-9c11-804d62428201 is in state STARTED 2025-04-04 01:52:55.396670 | orchestrator | 2025-04-04 01:52:55 | INFO  | Task 454bc2f6-9d45-451f-8ce1-d6a4331b55a5 is in state STARTED 2025-04-04 01:52:55.396981 | orchestrator | 2025-04-04 01:52:55 | INFO  | Wait 1 second(s) until the next check 2025-04-04 01:52:58.468454 | orchestrator | 2025-04-04 01:52:58 | INFO  | Task fd1c07e6-f0db-4865-a8a3-41f95612fdf3 is in state STARTED 2025-04-04 01:52:58.473604 | orchestrator | 2025-04-04 01:52:58 | INFO  | Task 96516d64-daf5-498e-9e4b-0a75c66f9fef is in state STARTED 2025-04-04 01:52:58.474111 | orchestrator | 2025-04-04 01:52:58 | INFO  | Task 8a72e806-06b2-4248-8108-4d22850067f9 is in state STARTED 2025-04-04 01:52:58.476926 | orchestrator | 2025-04-04 01:52:58 | INFO  | Task 4700ab06-d1be-44f1-9c11-804d62428201 is in state STARTED 2025-04-04 01:52:58.477450 | orchestrator | 2025-04-04 01:52:58 | INFO  | Task 454bc2f6-9d45-451f-8ce1-d6a4331b55a5 is in state STARTED 2025-04-04 01:52:58.477667 | orchestrator | 2025-04-04 01:52:58 | INFO  | Wait 1 second(s) until the next check 2025-04-04 01:53:01.535037 | orchestrator | 2025-04-04 01:53:01 | INFO  | Task fd1c07e6-f0db-4865-a8a3-41f95612fdf3 is in state STARTED 2025-04-04 01:53:01.538952 | orchestrator | 2025-04-04 01:53:01 | INFO  | Task 96516d64-daf5-498e-9e4b-0a75c66f9fef is in state STARTED 2025-04-04 01:53:01.539872 | orchestrator | 2025-04-04 01:53:01 | INFO  | Task 8a72e806-06b2-4248-8108-4d22850067f9 is in state STARTED 2025-04-04 01:53:01.540959 | orchestrator | 2025-04-04 01:53:01 | INFO  | Task 
4700ab06-d1be-44f1-9c11-804d62428201 is in state SUCCESS
2025-04-04 01:53:01.540993 | orchestrator |
2025-04-04 01:53:01.541010 | orchestrator |
2025-04-04 01:53:01.541025 | orchestrator | PLAY [Set kolla_action_rabbitmq] ***********************************************
2025-04-04 01:53:01.541056 | orchestrator |
2025-04-04 01:53:01.541071 | orchestrator | TASK [Inform the user about the following task] ********************************
2025-04-04 01:53:01.541085 | orchestrator | Friday 04 April 2025 01:50:37 +0000 (0:00:00.441) 0:00:00.441 **********
2025-04-04 01:53:01.541099 | orchestrator | ok: [localhost] => {
2025-04-04 01:53:01.541116 | orchestrator |  "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine."
2025-04-04 01:53:01.541130 | orchestrator | }
2025-04-04 01:53:01.541144 | orchestrator |
2025-04-04 01:53:01.541158 | orchestrator | TASK [Check RabbitMQ service] **************************************************
2025-04-04 01:53:01.541172 | orchestrator | Friday 04 April 2025 01:50:37 +0000 (0:00:00.177) 0:00:00.618 **********
2025-04-04 01:53:01.541187 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"}
2025-04-04 01:53:01.541202 | orchestrator | ...ignoring
2025-04-04 01:53:01.541216 | orchestrator |
2025-04-04 01:53:01.541230 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ******
2025-04-04 01:53:01.541244 | orchestrator | Friday 04 April 2025 01:50:40 +0000 (0:00:03.313) 0:00:03.931 **********
2025-04-04 01:53:01.541258 | orchestrator | skipping: [localhost]
2025-04-04 01:53:01.541358 | orchestrator |
2025-04-04 01:53:01.541377 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] *****************************
2025-04-04 01:53:01.541391 | orchestrator | Friday 04 April 2025 01:50:40 +0000 (0:00:00.145) 0:00:04.077 **********
2025-04-04 01:53:01.541405 | orchestrator | ok: [localhost]
2025-04-04 01:53:01.541419 | orchestrator |
2025-04-04 01:53:01.541433 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-04-04 01:53:01.541447 | orchestrator |
2025-04-04 01:53:01.541461 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-04-04 01:53:01.541475 | orchestrator | Friday 04 April 2025 01:50:41 +0000 (0:00:00.504) 0:00:04.581 **********
2025-04-04 01:53:01.541489 | orchestrator | ok: [testbed-node-0]
2025-04-04 01:53:01.541503 | orchestrator | ok: [testbed-node-1]
2025-04-04 01:53:01.541516 | orchestrator | ok: [testbed-node-2]
2025-04-04 01:53:01.541530 | orchestrator |
2025-04-04 01:53:01.541544 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-04-04 01:53:01.541558 | orchestrator | Friday 04 April 2025 01:50:42 +0000 (0:00:01.184) 0:00:05.766 **********
2025-04-04 01:53:01.541572 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True)
2025-04-04 01:53:01.541586 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True)
2025-04-04 01:53:01.541600 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True)
2025-04-04 01:53:01.541639 | orchestrator |
2025-04-04 01:53:01.541653 | orchestrator | PLAY [Apply role rabbitmq] *****************************************************
2025-04-04 01:53:01.541667 | orchestrator |
2025-04-04 01:53:01.541681 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2025-04-04 01:53:01.541695 | orchestrator | Friday 04 April 2025 01:50:44 +0000 (0:00:01.788) 0:00:07.554 **********
2025-04-04 01:53:01.541709 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-04-04 01:53:01.541723 | orchestrator |
2025-04-04 01:53:01.541737 | orchestrator | TASK [rabbitmq : Get container facts] ******************************************
2025-04-04 01:53:01.541751 | orchestrator | Friday 04 April 2025 01:50:45 +0000 (0:00:01.321) 0:00:08.876 **********
2025-04-04 01:53:01.541765 | orchestrator | ok: [testbed-node-0]
2025-04-04 01:53:01.541779 | orchestrator |
2025-04-04 01:53:01.541793 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] *********************************
2025-04-04 01:53:01.541807 | orchestrator | Friday 04 April 2025 01:50:47 +0000 (0:00:01.788) 0:00:10.665 **********
2025-04-04 01:53:01.541821 | orchestrator | skipping: [testbed-node-0]
2025-04-04 01:53:01.541836 | orchestrator |
2025-04-04 01:53:01.541850 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] *************************************
2025-04-04 01:53:01.541864 | orchestrator | Friday 04 April 2025 01:50:47 +0000 (0:00:00.466) 0:00:11.131 **********
2025-04-04 01:53:01.541878 | orchestrator | skipping: [testbed-node-0]
2025-04-04 01:53:01.541891 | orchestrator |
2025-04-04 01:53:01.541905 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ******
2025-04-04 01:53:01.541924 | orchestrator | Friday 04 April 2025 01:50:48 +0000 (0:00:01.277) 0:00:12.409 **********
2025-04-04 01:53:01.541938 | orchestrator | skipping: [testbed-node-0]
2025-04-04 01:53:01.541952 | orchestrator |
2025-04-04 01:53:01.541966 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] **********************
2025-04-04 01:53:01.541980 | orchestrator | Friday 04 April 2025 01:50:50 +0000 (0:00:01.035) 0:00:13.445 **********
2025-04-04 01:53:01.541994 | orchestrator | skipping: [testbed-node-0]
2025-04-04 01:53:01.542007 | orchestrator |
2025-04-04 01:53:01.542085 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2025-04-04 01:53:01.542102 | orchestrator | Friday 04 April 2025 01:50:50 +0000 (0:00:00.622) 0:00:14.067 **********
2025-04-04 01:53:01.542116 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-04-04 01:53:01.542131 | orchestrator |
2025-04-04 01:53:01.542144 | orchestrator | TASK [rabbitmq : Get container facts] ******************************************
2025-04-04 01:53:01.542159 | orchestrator | Friday 04 April 2025 01:50:52 +0000 (0:00:01.532) 0:00:15.599 **********
2025-04-04 01:53:01.542172 | orchestrator | ok: [testbed-node-0]
2025-04-04 01:53:01.542186 | orchestrator |
2025-04-04 01:53:01.542201 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] ***************************************
2025-04-04 01:53:01.542214 | orchestrator | Friday 04 April 2025 01:50:53 +0000 (0:00:01.293) 0:00:16.893 **********
2025-04-04 01:53:01.542228 | orchestrator | skipping: [testbed-node-0]
2025-04-04 01:53:01.542242 | orchestrator |
2025-04-04 01:53:01.542256 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] ***************************
2025-04-04 01:53:01.542270 | orchestrator | Friday 04 April 2025 01:50:54 +0000 (0:00:00.735) 0:00:17.629 **********
2025-04-04 01:53:01.542284 | orchestrator |
skipping: [testbed-node-0] 2025-04-04 01:53:01.542320 | orchestrator | 2025-04-04 01:53:01.542347 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2025-04-04 01:53:01.542364 | orchestrator | Friday 04 April 2025 01:50:54 +0000 (0:00:00.654) 0:00:18.284 ********** 2025-04-04 01:53:01.542380 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-04-04 01:53:01.542410 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 
'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-04-04 01:53:01.542434 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-04-04 01:53:01.542449 | orchestrator | 2025-04-04 01:53:01.542464 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2025-04-04 01:53:01.542478 | orchestrator | Friday 04 April 2025 01:50:57 +0000 (0:00:02.513) 0:00:20.798 ********** 2025-04-04 01:53:01.542505 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-04-04 01:53:01.542528 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-04-04 01:53:01.542549 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-04-04 01:53:01.542564 | orchestrator | 2025-04-04 01:53:01.542578 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2025-04-04 01:53:01.542592 | orchestrator | Friday 04 April 2025 01:50:59 +0000 (0:00:01.864) 0:00:22.662 ********** 2025-04-04 01:53:01.542606 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-04-04 01:53:01.542620 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-04-04 01:53:01.542634 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-04-04 01:53:01.542648 | 
orchestrator |
2025-04-04 01:53:01.542662 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] ***********************************
2025-04-04 01:53:01.542676 | orchestrator | Friday 04 April 2025 01:51:01 +0000 (0:00:02.067) 0:00:24.729 **********
2025-04-04 01:53:01.542690 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2025-04-04 01:53:01.542704 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2025-04-04 01:53:01.542718 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2025-04-04 01:53:01.542731 | orchestrator |
2025-04-04 01:53:01.542745 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] **************************************
2025-04-04 01:53:01.542759 | orchestrator | Friday 04 April 2025 01:51:03 +0000 (0:00:02.312) 0:00:27.042 **********
2025-04-04 01:53:01.542773 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2025-04-04 01:53:01.542787 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2025-04-04 01:53:01.542807 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2025-04-04 01:53:01.542821 | orchestrator |
2025-04-04 01:53:01.542842 | orchestrator | TASK [rabbitmq : Copying over advanced.config] *********************************
2025-04-04 01:53:01.542857 | orchestrator | Friday 04 April 2025 01:51:05 +0000 (0:00:02.222) 0:00:29.264 **********
2025-04-04 01:53:01.542871 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2025-04-04 01:53:01.542885 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2025-04-04 01:53:01.542899 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2025-04-04 01:53:01.542913 | orchestrator |
2025-04-04 01:53:01.542927 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ********************************
2025-04-04 01:53:01.542941 | orchestrator | Friday 04 April 2025 01:51:11 +0000 (0:00:05.253) 0:00:34.518 **********
2025-04-04 01:53:01.542955 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2025-04-04 01:53:01.542969 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2025-04-04 01:53:01.542983 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2025-04-04 01:53:01.542996 | orchestrator |
2025-04-04 01:53:01.543010 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] *********************************
2025-04-04 01:53:01.543024 | orchestrator | Friday 04 April 2025 01:51:13 +0000 (0:00:02.330) 0:00:36.848 **********
2025-04-04 01:53:01.543038 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2025-04-04 01:53:01.543052 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2025-04-04 01:53:01.543065 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2025-04-04 01:53:01.543079 | orchestrator |
2025-04-04 01:53:01.543093 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2025-04-04 01:53:01.543113 | orchestrator | Friday 04 April 2025 01:51:16 +0000 (0:00:02.696) 0:00:39.545 **********
2025-04-04 01:53:01.543127 | orchestrator | skipping: [testbed-node-0]
2025-04-04 01:53:01.543141 | orchestrator | skipping: [testbed-node-1]
2025-04-04 01:53:01.543155 | orchestrator | skipping: [testbed-node-2]
2025-04-04 01:53:01.543169 | orchestrator |
2025-04-04
01:53:01.543183 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************ 2025-04-04 01:53:01.543196 | orchestrator | Friday 04 April 2025 01:51:16 +0000 (0:00:00.756) 0:00:40.301 ********** 2025-04-04 01:53:01.543211 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-04-04 01:53:01.543226 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', ... same container definition as testbed-node-0 ...}) 2025-04-04 01:53:01.543261 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', ... same container definition as testbed-node-0 ...}) 2025-04-04 01:53:01.543277 | orchestrator | 2025-04-04 01:53:01.543310 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] ************************************* 2025-04-04 01:53:01.543324 | orchestrator | Friday 04 April 2025 01:51:19 +0000 (0:00:02.142) 0:00:42.444 ********** 2025-04-04 01:53:01.543338 | orchestrator | changed: [testbed-node-0] 2025-04-04 01:53:01.543353 | orchestrator | changed: [testbed-node-1]
2025-04-04 01:53:01.543366 | orchestrator | changed: [testbed-node-2] 2025-04-04 01:53:01.543380 | orchestrator | 2025-04-04 01:53:01.543394 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] ************************* 2025-04-04 01:53:01.543408 | orchestrator | Friday 04 April 2025 01:51:20 +0000 (0:00:01.494) 0:00:43.938 ********** 2025-04-04 01:53:01.543422 | orchestrator | changed: [testbed-node-0] 2025-04-04 01:53:01.543436 | orchestrator | changed: [testbed-node-1] 2025-04-04 01:53:01.543449 | orchestrator | changed: [testbed-node-2] 2025-04-04 01:53:01.543463 | orchestrator | 2025-04-04 01:53:01.543477 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2025-04-04 01:53:01.543491 | orchestrator | Friday 04 April 2025 01:51:26 +0000 (0:00:06.182) 0:00:50.120 ********** 2025-04-04 01:53:01.543505 | orchestrator | changed: [testbed-node-0] 2025-04-04 01:53:01.543519 | orchestrator | changed: [testbed-node-1] 2025-04-04 01:53:01.543533 | orchestrator | changed: [testbed-node-2] 2025-04-04 01:53:01.543546 | orchestrator | 2025-04-04 01:53:01.543560 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-04-04 01:53:01.543574 | orchestrator | 2025-04-04 01:53:01.543588 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-04-04 01:53:01.543602 | orchestrator | Friday 04 April 2025 01:51:27 +0000 (0:00:00.713) 0:00:50.834 ********** 2025-04-04 01:53:01.543615 | orchestrator | ok: [testbed-node-0] 2025-04-04 01:53:01.543629 | orchestrator | 2025-04-04 01:53:01.543643 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-04-04 01:53:01.543657 | orchestrator | Friday 04 April 2025 01:51:28 +0000 (0:00:00.796) 0:00:51.631 ********** 2025-04-04 01:53:01.543683 | orchestrator | skipping: [testbed-node-0] 2025-04-04 01:53:01.543698 | orchestrator | 2025-04-04 
01:53:01.543711 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-04-04 01:53:01.543725 | orchestrator | Friday 04 April 2025 01:51:28 +0000 (0:00:00.341) 0:00:51.973 ********** 2025-04-04 01:53:01.543739 | orchestrator | changed: [testbed-node-0] 2025-04-04 01:53:01.543753 | orchestrator | 2025-04-04 01:53:01.543767 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-04-04 01:53:01.543781 | orchestrator | Friday 04 April 2025 01:51:30 +0000 (0:00:02.348) 0:00:54.321 ********** 2025-04-04 01:53:01.543795 | orchestrator | changed: [testbed-node-0] 2025-04-04 01:53:01.543808 | orchestrator | 2025-04-04 01:53:01.543822 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-04-04 01:53:01.543836 | orchestrator | 2025-04-04 01:53:01.543850 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-04-04 01:53:01.543864 | orchestrator | Friday 04 April 2025 01:52:22 +0000 (0:00:51.280) 0:01:45.602 ********** 2025-04-04 01:53:01.543878 | orchestrator | ok: [testbed-node-1] 2025-04-04 01:53:01.543891 | orchestrator | 2025-04-04 01:53:01.543905 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-04-04 01:53:01.543919 | orchestrator | Friday 04 April 2025 01:52:22 +0000 (0:00:00.682) 0:01:46.284 ********** 2025-04-04 01:53:01.543933 | orchestrator | skipping: [testbed-node-1] 2025-04-04 01:53:01.543947 | orchestrator | 2025-04-04 01:53:01.543961 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-04-04 01:53:01.543975 | orchestrator | Friday 04 April 2025 01:52:23 +0000 (0:00:00.239) 0:01:46.524 ********** 2025-04-04 01:53:01.543988 | orchestrator | changed: [testbed-node-1] 2025-04-04 01:53:01.544002 | orchestrator | 2025-04-04 01:53:01.544016 | orchestrator | TASK [rabbitmq : 
Waiting for rabbitmq to start] ******************************** 2025-04-04 01:53:01.544030 | orchestrator | Friday 04 April 2025 01:52:24 +0000 (0:00:01.874) 0:01:48.398 ********** 2025-04-04 01:53:01.544044 | orchestrator | changed: [testbed-node-1] 2025-04-04 01:53:01.544057 | orchestrator | 2025-04-04 01:53:01.544071 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-04-04 01:53:01.544085 | orchestrator | 2025-04-04 01:53:01.544099 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-04-04 01:53:01.544113 | orchestrator | Friday 04 April 2025 01:52:38 +0000 (0:00:13.328) 0:02:01.726 ********** 2025-04-04 01:53:01.544127 | orchestrator | ok: [testbed-node-2] 2025-04-04 01:53:01.544141 | orchestrator | 2025-04-04 01:53:01.544155 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-04-04 01:53:01.544168 | orchestrator | Friday 04 April 2025 01:52:39 +0000 (0:00:00.955) 0:02:02.682 ********** 2025-04-04 01:53:01.544182 | orchestrator | skipping: [testbed-node-2] 2025-04-04 01:53:01.544203 | orchestrator | 2025-04-04 01:53:01.544222 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-04-04 01:53:01.544243 | orchestrator | Friday 04 April 2025 01:52:40 +0000 (0:00:00.779) 0:02:03.461 ********** 2025-04-04 01:53:01.544257 | orchestrator | changed: [testbed-node-2] 2025-04-04 01:53:01.544271 | orchestrator | 2025-04-04 01:53:01.544286 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-04-04 01:53:01.544343 | orchestrator | Friday 04 April 2025 01:52:42 +0000 (0:00:02.718) 0:02:06.180 ********** 2025-04-04 01:53:01.544358 | orchestrator | changed: [testbed-node-2] 2025-04-04 01:53:01.544372 | orchestrator | 2025-04-04 01:53:01.544386 | orchestrator | PLAY [Apply rabbitmq post-configuration] 
*************************************** 2025-04-04 01:53:01.544400 | orchestrator | 2025-04-04 01:53:01.544414 | orchestrator | TASK [Include rabbitmq post-deploy.yml] **************************************** 2025-04-04 01:53:01.544428 | orchestrator | Friday 04 April 2025 01:52:54 +0000 (0:00:11.701) 0:02:17.881 ********** 2025-04-04 01:53:01.544442 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-04 01:53:01.544456 | orchestrator | 2025-04-04 01:53:01.544470 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2025-04-04 01:53:01.544491 | orchestrator | Friday 04 April 2025 01:52:55 +0000 (0:00:00.905) 0:02:18.786 ********** 2025-04-04 01:53:01.544505 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-04-04 01:53:01.544519 | orchestrator | enable_outward_rabbitmq_True 2025-04-04 01:53:01.544534 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-04-04 01:53:01.544547 | orchestrator | outward_rabbitmq_restart 2025-04-04 01:53:01.544562 | orchestrator | ok: [testbed-node-0] 2025-04-04 01:53:01.544576 | orchestrator | ok: [testbed-node-1] 2025-04-04 01:53:01.544590 | orchestrator | ok: [testbed-node-2] 2025-04-04 01:53:01.544604 | orchestrator | 2025-04-04 01:53:01.544618 | orchestrator | PLAY [Apply role rabbitmq (outward)] ******************************************* 2025-04-04 01:53:01.544632 | orchestrator | skipping: no hosts matched 2025-04-04 01:53:01.544646 | orchestrator | 2025-04-04 01:53:01.544660 | orchestrator | PLAY [Restart rabbitmq (outward) services] ************************************* 2025-04-04 01:53:01.544674 | orchestrator | skipping: no hosts matched 2025-04-04 01:53:01.544688 | orchestrator | 2025-04-04 01:53:01.544701 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] ***************************** 2025-04-04 01:53:01.544715 | orchestrator | skipping: no hosts matched 
2025-04-04 01:53:01.544729 | orchestrator | 2025-04-04 01:53:01.544743 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-04 01:53:01.544757 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2025-04-04 01:53:01.544772 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-04-04 01:53:01.544786 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-04 01:53:01.544800 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-04 01:53:01.544814 | orchestrator | 2025-04-04 01:53:01.544828 | orchestrator | 2025-04-04 01:53:01.544842 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-04 01:53:01.544856 | orchestrator | Friday 04 April 2025 01:52:59 +0000 (0:00:03.677) 0:02:22.464 ********** 2025-04-04 01:53:01.544870 | orchestrator | =============================================================================== 2025-04-04 01:53:01.544884 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 76.31s 2025-04-04 01:53:01.544898 | orchestrator | rabbitmq : Restart rabbitmq container ----------------------------------- 6.94s 2025-04-04 01:53:01.544911 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 6.18s 2025-04-04 01:53:01.544925 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 5.25s 2025-04-04 01:53:01.544939 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 3.68s 2025-04-04 01:53:01.544953 | orchestrator | Check RabbitMQ service -------------------------------------------------- 3.31s 2025-04-04 01:53:01.544966 | orchestrator | rabbitmq : Copying over enabled_plugins 
--------------------------------- 2.70s 2025-04-04 01:53:01.544981 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 2.51s 2025-04-04 01:53:01.544995 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 2.43s 2025-04-04 01:53:01.545009 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 2.33s 2025-04-04 01:53:01.545022 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 2.31s 2025-04-04 01:53:01.545036 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 2.22s 2025-04-04 01:53:01.545050 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 2.14s 2025-04-04 01:53:01.545064 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 2.07s 2025-04-04 01:53:01.545085 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 1.86s 2025-04-04 01:53:01.545104 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.79s 2025-04-04 01:53:01.545119 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.79s 2025-04-04 01:53:01.545133 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 1.53s 2025-04-04 01:53:01.545146 | orchestrator | rabbitmq : Creating rabbitmq volume ------------------------------------- 1.49s 2025-04-04 01:53:01.545160 | orchestrator | rabbitmq : Put RabbitMQ node into maintenance mode ---------------------- 1.36s 2025-04-04 01:53:01.545180 | orchestrator | 2025-04-04 01:53:01 | INFO  | Task 454bc2f6-9d45-451f-8ce1-d6a4331b55a5 is in state STARTED 2025-04-04 01:53:04.621993 | orchestrator | 2025-04-04 01:53:01 | INFO  | Wait 1 second(s) until the next check 2025-04-04 01:53:04.622232 | orchestrator | 2025-04-04 01:53:04 | INFO  | Task 
fd1c07e6-f0db-4865-a8a3-41f95612fdf3 is in state STARTED
[... repeated polling output trimmed: tasks fd1c07e6-f0db-4865-a8a3-41f95612fdf3, 96516d64-daf5-498e-9e4b-0a75c66f9fef, 8a72e806-06b2-4248-8108-4d22850067f9 and 454bc2f6-9d45-451f-8ce1-d6a4331b55a5 remained in state STARTED, re-checked every few seconds from 2025-04-04 01:53:04 to 01:54:27 ...]
2025-04-04 01:54:27.235765 | orchestrator | 2025-04-04 01:54:27 | INFO  | Task
fd1c07e6-f0db-4865-a8a3-41f95612fdf3 is in state STARTED 2025-04-04 01:54:27.236256 | orchestrator | 2025-04-04 01:54:27 | INFO  | Task 96516d64-daf5-498e-9e4b-0a75c66f9fef is in state STARTED 2025-04-04 01:54:27.237758 | orchestrator | 2025-04-04 01:54:27 | INFO  | Task 8a72e806-06b2-4248-8108-4d22850067f9 is in state STARTED 2025-04-04 01:54:27.243139 | orchestrator | 2025-04-04 01:54:27 | INFO  | Task 454bc2f6-9d45-451f-8ce1-d6a4331b55a5 is in state STARTED 2025-04-04 01:54:30.300371 | orchestrator | 2025-04-04 01:54:27 | INFO  | Wait 1 second(s) until the next check 2025-04-04 01:54:30.300522 | orchestrator | 2025-04-04 01:54:30 | INFO  | Task fd1c07e6-f0db-4865-a8a3-41f95612fdf3 is in state STARTED 2025-04-04 01:54:30.304369 | orchestrator | 2025-04-04 01:54:30.304420 | orchestrator | 2025-04-04 01:54:30.304436 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-04-04 01:54:30.304451 | orchestrator | 2025-04-04 01:54:30.304465 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-04-04 01:54:30.304479 | orchestrator | Friday 04 April 2025 01:51:50 +0000 (0:00:00.209) 0:00:00.209 ********** 2025-04-04 01:54:30.304493 | orchestrator | ok: [testbed-node-3] 2025-04-04 01:54:30.304508 | orchestrator | ok: [testbed-node-4] 2025-04-04 01:54:30.304521 | orchestrator | ok: [testbed-node-5] 2025-04-04 01:54:30.304535 | orchestrator | ok: [testbed-node-0] 2025-04-04 01:54:30.304548 | orchestrator | ok: [testbed-node-1] 2025-04-04 01:54:30.304562 | orchestrator | ok: [testbed-node-2] 2025-04-04 01:54:30.304577 | orchestrator | 2025-04-04 01:54:30.304591 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-04-04 01:54:30.304605 | orchestrator | Friday 04 April 2025 01:51:51 +0000 (0:00:00.614) 0:00:00.824 ********** 2025-04-04 01:54:30.304618 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2025-04-04 
01:54:30.304632 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2025-04-04 01:54:30.304645 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2025-04-04 01:54:30.304659 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2025-04-04 01:54:30.304672 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2025-04-04 01:54:30.304685 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2025-04-04 01:54:30.304699 | orchestrator | 2025-04-04 01:54:30.304712 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2025-04-04 01:54:30.304725 | orchestrator | 2025-04-04 01:54:30.304739 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2025-04-04 01:54:30.304752 | orchestrator | Friday 04 April 2025 01:51:52 +0000 (0:00:01.146) 0:00:01.970 ********** 2025-04-04 01:54:30.304767 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-04-04 01:54:30.304782 | orchestrator | 2025-04-04 01:54:30.304795 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2025-04-04 01:54:30.304808 | orchestrator | Friday 04 April 2025 01:51:54 +0000 (0:00:01.727) 0:00:03.697 ********** 2025-04-04 01:54:30.304824 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-04 01:54:30.304840 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 
'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-04 01:54:30.304876 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-04 01:54:30.304892 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-04 01:54:30.304908 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-04 01:54:30.304945 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-04 01:54:30.304961 | orchestrator | 2025-04-04 01:54:30.304976 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2025-04-04 01:54:30.304991 | orchestrator | Friday 04 April 2025 01:51:56 +0000 (0:00:02.187) 0:00:05.885 ********** 2025-04-04 01:54:30.305010 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-04 01:54:30.305026 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-04 01:54:30.305041 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2025-04-04 01:54:30.305055 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-04 01:54:30.305069 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-04 01:54:30.305090 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-04 01:54:30.305105 | orchestrator | 2025-04-04 01:54:30.305120 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2025-04-04 01:54:30.305134 | orchestrator | Friday 04 April 2025 01:51:59 +0000 (0:00:03.045) 0:00:08.930 ********** 2025-04-04 01:54:30.305148 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': 
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-04 01:54:30.305163 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-04 01:54:30.305192 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-04 01:54:30.305208 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-04 01:54:30.305223 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', 
'/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-04 01:54:30.305238 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-04 01:54:30.305251 | orchestrator | 2025-04-04 01:54:30.305264 | orchestrator | TASK [ovn-controller : Copying over systemd override] ************************** 2025-04-04 01:54:30.305300 | orchestrator | Friday 04 April 2025 01:52:01 +0000 (0:00:01.468) 0:00:10.398 ********** 2025-04-04 01:54:30.305314 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-04 01:54:30.305327 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-04 01:54:30.305340 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 
'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-04 01:54:30.305353 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-04 01:54:30.305365 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-04 01:54:30.305389 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-04 01:54:30.305403 | orchestrator | 2025-04-04 01:54:30.305416 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************ 2025-04-04 01:54:30.305429 | orchestrator | Friday 04 April 2025 01:52:03 
+0000 (0:00:02.459) 0:00:12.857 ********** 2025-04-04 01:54:30.305441 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-04 01:54:30.305454 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-04 01:54:30.305472 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-04 01:54:30.305485 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-04 01:54:30.305497 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-04 01:54:30.305510 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-04 01:54:30.305522 | orchestrator | 2025-04-04 01:54:30.305535 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2025-04-04 01:54:30.305548 | orchestrator | Friday 04 April 2025 01:52:05 +0000 (0:00:01.654) 0:00:14.512 ********** 2025-04-04 01:54:30.305560 | orchestrator | changed: [testbed-node-4] 2025-04-04 01:54:30.305574 | orchestrator | changed: [testbed-node-5] 2025-04-04 01:54:30.305586 | orchestrator | changed: [testbed-node-3] 2025-04-04 01:54:30.305599 | orchestrator | changed: [testbed-node-1] 2025-04-04 01:54:30.305611 | orchestrator | changed: [testbed-node-0] 2025-04-04 01:54:30.305624 | orchestrator | changed: [testbed-node-2] 2025-04-04 01:54:30.305636 | orchestrator | 2025-04-04 01:54:30.305649 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2025-04-04 01:54:30.305661 | orchestrator | Friday 04 April 2025 01:52:08 +0000 (0:00:02.953) 0:00:17.465 ********** 2025-04-04 01:54:30.305674 | orchestrator | changed: [testbed-node-3] => (item={'name': 
'ovn-encap-ip', 'value': '192.168.16.13'}) 2025-04-04 01:54:30.305687 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2025-04-04 01:54:30.305699 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2025-04-04 01:54:30.305716 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2025-04-04 01:54:30.305729 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2025-04-04 01:54:30.305742 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-04-04 01:54:30.305754 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-04-04 01:54:30.305766 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-04-04 01:54:30.305779 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2025-04-04 01:54:30.305791 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-04-04 01:54:30.305809 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-04-04 01:54:30.305822 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-04-04 01:54:30.305837 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-04-04 01:54:30.305850 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-04-04 01:54:30.305868 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 
'geneve'}) 2025-04-04 01:54:30.305881 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-04-04 01:54:30.305893 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-04-04 01:54:30.305906 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-04-04 01:54:30.305919 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-04-04 01:54:30.305932 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-04-04 01:54:30.305945 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-04-04 01:54:30.305957 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-04-04 01:54:30.305969 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-04-04 01:54:30.305982 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-04-04 01:54:30.305998 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-04-04 01:54:30.306011 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-04-04 01:54:30.306097 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-04-04 01:54:30.306111 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-04-04 01:54:30.306123 | orchestrator | changed: [testbed-node-0] => 
(item={'name': 'ovn-monitor-all', 'value': False}) 2025-04-04 01:54:30.306135 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-04-04 01:54:30.306148 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-04-04 01:54:30.306160 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-04-04 01:54:30.306172 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-04-04 01:54:30.306185 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-04-04 01:54:30.306197 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-04-04 01:54:30.306210 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-04-04 01:54:30.306223 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-04-04 01:54:30.306236 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-04-04 01:54:30.306248 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-04-04 01:54:30.306269 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-04-04 01:54:30.306337 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2025-04-04 01:54:30.306353 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-04-04 01:54:30.306367 | orchestrator | changed: [testbed-node-3] => (item={'name': 
'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'}) 2025-04-04 01:54:30.306379 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'}) 2025-04-04 01:54:30.306393 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-04-04 01:54:30.306406 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2025-04-04 01:54:30.306419 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-04-04 01:54:30.306431 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-04-04 01:54:30.306444 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2025-04-04 01:54:30.306456 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-04-04 01:54:30.306468 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 2025-04-04 01:54:30.306481 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-04-04 01:54:30.306494 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-04-04 01:54:30.306507 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-04-04 01:54:30.306519 | orchestrator | 2025-04-04 01:54:30.306532 | orchestrator | TASK 
[ovn-controller : Flush handlers] ***************************************** 2025-04-04 01:54:30.306544 | orchestrator | Friday 04 April 2025 01:52:29 +0000 (0:00:21.074) 0:00:38.539 ********** 2025-04-04 01:54:30.306557 | orchestrator | 2025-04-04 01:54:30.306569 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-04-04 01:54:30.306582 | orchestrator | Friday 04 April 2025 01:52:29 +0000 (0:00:00.071) 0:00:38.611 ********** 2025-04-04 01:54:30.306594 | orchestrator | 2025-04-04 01:54:30.306607 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-04-04 01:54:30.306619 | orchestrator | Friday 04 April 2025 01:52:29 +0000 (0:00:00.330) 0:00:38.942 ********** 2025-04-04 01:54:30.306631 | orchestrator | 2025-04-04 01:54:30.306644 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-04-04 01:54:30.306656 | orchestrator | Friday 04 April 2025 01:52:29 +0000 (0:00:00.068) 0:00:39.011 ********** 2025-04-04 01:54:30.306669 | orchestrator | 2025-04-04 01:54:30.306681 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-04-04 01:54:30.306693 | orchestrator | Friday 04 April 2025 01:52:29 +0000 (0:00:00.073) 0:00:39.084 ********** 2025-04-04 01:54:30.306706 | orchestrator | 2025-04-04 01:54:30.306718 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-04-04 01:54:30.306730 | orchestrator | Friday 04 April 2025 01:52:29 +0000 (0:00:00.147) 0:00:39.232 ********** 2025-04-04 01:54:30.306743 | orchestrator | 2025-04-04 01:54:30.306755 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] *********************** 2025-04-04 01:54:30.306767 | orchestrator | Friday 04 April 2025 01:52:30 +0000 (0:00:00.801) 0:00:40.034 ********** 2025-04-04 01:54:30.306787 | orchestrator | ok: [testbed-node-5] 2025-04-04 
01:54:30.306798 | orchestrator | ok: [testbed-node-1] 2025-04-04 01:54:30.306808 | orchestrator | ok: [testbed-node-4] 2025-04-04 01:54:30.306818 | orchestrator | ok: [testbed-node-0] 2025-04-04 01:54:30.306828 | orchestrator | ok: [testbed-node-3] 2025-04-04 01:54:30.306838 | orchestrator | ok: [testbed-node-2] 2025-04-04 01:54:30.306848 | orchestrator | 2025-04-04 01:54:30.306859 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************ 2025-04-04 01:54:30.306869 | orchestrator | Friday 04 April 2025 01:52:33 +0000 (0:00:02.767) 0:00:42.801 ********** 2025-04-04 01:54:30.306879 | orchestrator | changed: [testbed-node-0] 2025-04-04 01:54:30.306889 | orchestrator | changed: [testbed-node-1] 2025-04-04 01:54:30.306899 | orchestrator | changed: [testbed-node-3] 2025-04-04 01:54:30.306909 | orchestrator | changed: [testbed-node-4] 2025-04-04 01:54:30.306919 | orchestrator | changed: [testbed-node-5] 2025-04-04 01:54:30.306929 | orchestrator | changed: [testbed-node-2] 2025-04-04 01:54:30.306939 | orchestrator | 2025-04-04 01:54:30.306950 | orchestrator | PLAY [Apply role ovn-db] ******************************************************* 2025-04-04 01:54:30.306960 | orchestrator | 2025-04-04 01:54:30.306970 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-04-04 01:54:30.306980 | orchestrator | Friday 04 April 2025 01:52:51 +0000 (0:00:18.493) 0:01:01.295 ********** 2025-04-04 01:54:30.306990 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-04 01:54:30.307000 | orchestrator | 2025-04-04 01:54:30.307010 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-04-04 01:54:30.307020 | orchestrator | Friday 04 April 2025 01:52:52 +0000 (0:00:00.830) 0:01:02.126 ********** 2025-04-04 01:54:30.307031 | orchestrator | included: 
/ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-04 01:54:30.307041 | orchestrator | 2025-04-04 01:54:30.307056 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] ************* 2025-04-04 01:54:30.307067 | orchestrator | Friday 04 April 2025 01:52:53 +0000 (0:00:01.218) 0:01:03.344 ********** 2025-04-04 01:54:30.307077 | orchestrator | ok: [testbed-node-0] 2025-04-04 01:54:30.307087 | orchestrator | ok: [testbed-node-1] 2025-04-04 01:54:30.307097 | orchestrator | ok: [testbed-node-2] 2025-04-04 01:54:30.307107 | orchestrator | 2025-04-04 01:54:30.307118 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] *************** 2025-04-04 01:54:30.307131 | orchestrator | Friday 04 April 2025 01:52:56 +0000 (0:00:02.165) 0:01:05.510 ********** 2025-04-04 01:54:30.307212 | orchestrator | ok: [testbed-node-0] 2025-04-04 01:54:30.307224 | orchestrator | ok: [testbed-node-1] 2025-04-04 01:54:30.307234 | orchestrator | ok: [testbed-node-2] 2025-04-04 01:54:30.307244 | orchestrator | 2025-04-04 01:54:30.307254 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] *************** 2025-04-04 01:54:30.307264 | orchestrator | Friday 04 April 2025 01:52:56 +0000 (0:00:00.519) 0:01:06.029 ********** 2025-04-04 01:54:30.307274 | orchestrator | ok: [testbed-node-0] 2025-04-04 01:54:30.307300 | orchestrator | ok: [testbed-node-1] 2025-04-04 01:54:30.307311 | orchestrator | ok: [testbed-node-2] 2025-04-04 01:54:30.307321 | orchestrator | 2025-04-04 01:54:30.307331 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2025-04-04 01:54:30.307342 | orchestrator | Friday 04 April 2025 01:52:57 +0000 (0:00:01.035) 0:01:07.064 ********** 2025-04-04 01:54:30.307352 | orchestrator | ok: [testbed-node-0] 2025-04-04 01:54:30.307362 | orchestrator | ok: [testbed-node-1] 2025-04-04 01:54:30.307372 | orchestrator 
| ok: [testbed-node-2] 2025-04-04 01:54:30.307382 | orchestrator | 2025-04-04 01:54:30.307392 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] ******* 2025-04-04 01:54:30.307402 | orchestrator | Friday 04 April 2025 01:52:58 +0000 (0:00:00.911) 0:01:07.976 ********** 2025-04-04 01:54:30.307412 | orchestrator | ok: [testbed-node-0] 2025-04-04 01:54:30.307423 | orchestrator | ok: [testbed-node-1] 2025-04-04 01:54:30.307438 | orchestrator | ok: [testbed-node-2] 2025-04-04 01:54:30.307448 | orchestrator | 2025-04-04 01:54:30.307458 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************ 2025-04-04 01:54:30.307468 | orchestrator | Friday 04 April 2025 01:52:59 +0000 (0:00:01.218) 0:01:09.194 ********** 2025-04-04 01:54:30.307479 | orchestrator | skipping: [testbed-node-0] 2025-04-04 01:54:30.307489 | orchestrator | skipping: [testbed-node-1] 2025-04-04 01:54:30.307503 | orchestrator | skipping: [testbed-node-2] 2025-04-04 01:54:30.307514 | orchestrator | 2025-04-04 01:54:30.307524 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] ***************************** 2025-04-04 01:54:30.307534 | orchestrator | Friday 04 April 2025 01:53:00 +0000 (0:00:00.694) 0:01:09.889 ********** 2025-04-04 01:54:30.307544 | orchestrator | skipping: [testbed-node-0] 2025-04-04 01:54:30.307555 | orchestrator | skipping: [testbed-node-1] 2025-04-04 01:54:30.307565 | orchestrator | skipping: [testbed-node-2] 2025-04-04 01:54:30.307575 | orchestrator | 2025-04-04 01:54:30.307585 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] ************* 2025-04-04 01:54:30.307595 | orchestrator | Friday 04 April 2025 01:53:01 +0000 (0:00:01.010) 0:01:10.900 ********** 2025-04-04 01:54:30.307605 | orchestrator | skipping: [testbed-node-0] 2025-04-04 01:54:30.307615 | orchestrator | skipping: [testbed-node-1] 2025-04-04 01:54:30.307625 | orchestrator | skipping: 
[testbed-node-2] 2025-04-04 01:54:30.307635 | orchestrator | 2025-04-04 01:54:30.307646 | orchestrator | TASK [ovn-db : Get OVN NB database information] ******************************** 2025-04-04 01:54:30.307656 | orchestrator | Friday 04 April 2025 01:53:02 +0000 (0:00:00.582) 0:01:11.482 ********** 2025-04-04 01:54:30.307666 | orchestrator | skipping: [testbed-node-0] 2025-04-04 01:54:30.307676 | orchestrator | skipping: [testbed-node-1] 2025-04-04 01:54:30.307687 | orchestrator | skipping: [testbed-node-2] 2025-04-04 01:54:30.307697 | orchestrator | 2025-04-04 01:54:30.307707 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] ************** 2025-04-04 01:54:30.307717 | orchestrator | Friday 04 April 2025 01:53:02 +0000 (0:00:00.418) 0:01:11.901 ********** 2025-04-04 01:54:30.307727 | orchestrator | skipping: [testbed-node-0] 2025-04-04 01:54:30.307737 | orchestrator | skipping: [testbed-node-1] 2025-04-04 01:54:30.307748 | orchestrator | skipping: [testbed-node-2] 2025-04-04 01:54:30.307758 | orchestrator | 2025-04-04 01:54:30.307768 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] ***************** 2025-04-04 01:54:30.307778 | orchestrator | Friday 04 April 2025 01:53:03 +0000 (0:00:00.673) 0:01:12.574 ********** 2025-04-04 01:54:30.307788 | orchestrator | skipping: [testbed-node-0] 2025-04-04 01:54:30.307798 | orchestrator | skipping: [testbed-node-1] 2025-04-04 01:54:30.307808 | orchestrator | skipping: [testbed-node-2] 2025-04-04 01:54:30.307818 | orchestrator | 2025-04-04 01:54:30.307828 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************ 2025-04-04 01:54:30.307838 | orchestrator | Friday 04 April 2025 01:53:03 +0000 (0:00:00.787) 0:01:13.362 ********** 2025-04-04 01:54:30.307848 | orchestrator | skipping: [testbed-node-0] 2025-04-04 01:54:30.307858 | orchestrator | skipping: [testbed-node-1] 2025-04-04 01:54:30.307868 | orchestrator | skipping: 
[testbed-node-2] 2025-04-04 01:54:30.307878 | orchestrator | 2025-04-04 01:54:30.307889 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] ***************************** 2025-04-04 01:54:30.307899 | orchestrator | Friday 04 April 2025 01:53:04 +0000 (0:00:00.616) 0:01:13.978 ********** 2025-04-04 01:54:30.307909 | orchestrator | skipping: [testbed-node-0] 2025-04-04 01:54:30.307919 | orchestrator | skipping: [testbed-node-1] 2025-04-04 01:54:30.307929 | orchestrator | skipping: [testbed-node-2] 2025-04-04 01:54:30.307939 | orchestrator | 2025-04-04 01:54:30.307949 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] ************* 2025-04-04 01:54:30.307959 | orchestrator | Friday 04 April 2025 01:53:04 +0000 (0:00:00.346) 0:01:14.325 ********** 2025-04-04 01:54:30.307969 | orchestrator | skipping: [testbed-node-0] 2025-04-04 01:54:30.307979 | orchestrator | skipping: [testbed-node-1] 2025-04-04 01:54:30.307990 | orchestrator | skipping: [testbed-node-2] 2025-04-04 01:54:30.308006 | orchestrator | 2025-04-04 01:54:30.308016 | orchestrator | TASK [ovn-db : Get OVN SB database information] ******************************** 2025-04-04 01:54:30.308026 | orchestrator | Friday 04 April 2025 01:53:05 +0000 (0:00:00.761) 0:01:15.086 ********** 2025-04-04 01:54:30.308036 | orchestrator | skipping: [testbed-node-0] 2025-04-04 01:54:30.308047 | orchestrator | skipping: [testbed-node-1] 2025-04-04 01:54:30.308057 | orchestrator | skipping: [testbed-node-2] 2025-04-04 01:54:30.308067 | orchestrator | 2025-04-04 01:54:30.308081 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] ************** 2025-04-04 01:54:30.308092 | orchestrator | Friday 04 April 2025 01:53:06 +0000 (0:00:00.685) 0:01:15.772 ********** 2025-04-04 01:54:30.308102 | orchestrator | skipping: [testbed-node-0] 2025-04-04 01:54:30.308112 | orchestrator | skipping: [testbed-node-1] 2025-04-04 01:54:30.308122 | orchestrator | skipping: 
[testbed-node-2] 2025-04-04 01:54:30.308133 | orchestrator | 2025-04-04 01:54:30.308143 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] ***************** 2025-04-04 01:54:30.308153 | orchestrator | Friday 04 April 2025 01:53:07 +0000 (0:00:00.694) 0:01:16.466 ********** 2025-04-04 01:54:30.308163 | orchestrator | skipping: [testbed-node-0] 2025-04-04 01:54:30.308173 | orchestrator | skipping: [testbed-node-1] 2025-04-04 01:54:30.308183 | orchestrator | skipping: [testbed-node-2] 2025-04-04 01:54:30.308194 | orchestrator | 2025-04-04 01:54:30.308204 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-04-04 01:54:30.308217 | orchestrator | Friday 04 April 2025 01:53:07 +0000 (0:00:00.409) 0:01:16.875 ********** 2025-04-04 01:54:30.308228 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-04 01:54:30.308238 | orchestrator | 2025-04-04 01:54:30.308248 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] ******************* 2025-04-04 01:54:30.308259 | orchestrator | Friday 04 April 2025 01:53:08 +0000 (0:00:01.153) 0:01:18.029 ********** 2025-04-04 01:54:30.308269 | orchestrator | ok: [testbed-node-0] 2025-04-04 01:54:30.308279 | orchestrator | ok: [testbed-node-1] 2025-04-04 01:54:30.308301 | orchestrator | ok: [testbed-node-2] 2025-04-04 01:54:30.308312 | orchestrator | 2025-04-04 01:54:30.308322 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] ******************* 2025-04-04 01:54:30.308332 | orchestrator | Friday 04 April 2025 01:53:09 +0000 (0:00:01.029) 0:01:19.059 ********** 2025-04-04 01:54:30.308342 | orchestrator | ok: [testbed-node-0] 2025-04-04 01:54:30.308353 | orchestrator | ok: [testbed-node-1] 2025-04-04 01:54:30.308363 | orchestrator | ok: [testbed-node-2] 2025-04-04 01:54:30.308373 | orchestrator | 2025-04-04 01:54:30.308383 | 
orchestrator | TASK [ovn-db : Check NB cluster status] **************************************** 2025-04-04 01:54:30.308393 | orchestrator | Friday 04 April 2025 01:53:10 +0000 (0:00:00.931) 0:01:19.991 ********** 2025-04-04 01:54:30.308403 | orchestrator | skipping: [testbed-node-0] 2025-04-04 01:54:30.308413 | orchestrator | skipping: [testbed-node-1] 2025-04-04 01:54:30.308424 | orchestrator | skipping: [testbed-node-2] 2025-04-04 01:54:30.308434 | orchestrator | 2025-04-04 01:54:30.308444 | orchestrator | TASK [ovn-db : Check SB cluster status] **************************************** 2025-04-04 01:54:30.308454 | orchestrator | Friday 04 April 2025 01:53:11 +0000 (0:00:00.730) 0:01:20.721 ********** 2025-04-04 01:54:30.308464 | orchestrator | skipping: [testbed-node-0] 2025-04-04 01:54:30.308474 | orchestrator | skipping: [testbed-node-1] 2025-04-04 01:54:30.308484 | orchestrator | skipping: [testbed-node-2] 2025-04-04 01:54:30.308494 | orchestrator | 2025-04-04 01:54:30.308504 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] *** 2025-04-04 01:54:30.308518 | orchestrator | Friday 04 April 2025 01:53:11 +0000 (0:00:00.626) 0:01:21.347 ********** 2025-04-04 01:54:30.308529 | orchestrator | skipping: [testbed-node-0] 2025-04-04 01:54:30.308539 | orchestrator | skipping: [testbed-node-1] 2025-04-04 01:54:30.308549 | orchestrator | skipping: [testbed-node-2] 2025-04-04 01:54:30.308559 | orchestrator | 2025-04-04 01:54:30.308569 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] *** 2025-04-04 01:54:30.308587 | orchestrator | Friday 04 April 2025 01:53:12 +0000 (0:00:00.388) 0:01:21.736 ********** 2025-04-04 01:54:30.308597 | orchestrator | skipping: [testbed-node-0] 2025-04-04 01:54:30.308607 | orchestrator | skipping: [testbed-node-1] 2025-04-04 01:54:30.308621 | orchestrator | skipping: [testbed-node-2] 2025-04-04 01:54:30.308631 | orchestrator | 2025-04-04 
01:54:30.308641 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ******************** 2025-04-04 01:54:30.308652 | orchestrator | Friday 04 April 2025 01:53:12 +0000 (0:00:00.643) 0:01:22.379 ********** 2025-04-04 01:54:30.308662 | orchestrator | skipping: [testbed-node-0] 2025-04-04 01:54:30.308672 | orchestrator | skipping: [testbed-node-1] 2025-04-04 01:54:30.308682 | orchestrator | skipping: [testbed-node-2] 2025-04-04 01:54:30.308692 | orchestrator | 2025-04-04 01:54:30.308702 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ******************** 2025-04-04 01:54:30.308713 | orchestrator | Friday 04 April 2025 01:53:13 +0000 (0:00:00.588) 0:01:22.967 ********** 2025-04-04 01:54:30.308723 | orchestrator | skipping: [testbed-node-0] 2025-04-04 01:54:30.308733 | orchestrator | skipping: [testbed-node-1] 2025-04-04 01:54:30.308743 | orchestrator | skipping: [testbed-node-2] 2025-04-04 01:54:30.308753 | orchestrator | 2025-04-04 01:54:30.308763 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2025-04-04 01:54:30.308773 | orchestrator | Friday 04 April 2025 01:53:14 +0000 (0:00:01.160) 0:01:24.128 ********** 2025-04-04 01:54:30.308784 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-04 01:54:30.308797 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-04 01:54:30.308818 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-04 01:54:30.308829 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-04 01:54:30.308840 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-04 01:54:30.308850 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-04 01:54:30.308865 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 
'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-04 01:54:30.308876 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-04 01:54:30.308886 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-04 01:54:30.308897 | orchestrator | 2025-04-04 01:54:30.308907 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2025-04-04 01:54:30.308917 | orchestrator | Friday 04 April 2025 01:53:16 +0000 (0:00:02.050) 0:01:26.178 ********** 2025-04-04 01:54:30.308928 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-04 01:54:30.308938 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-04 01:54:30.308948 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-04 01:54:30.308963 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-04 01:54:30.308978 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-04 01:54:30.308989 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': 
['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-04 01:54:30.309004 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-04 01:54:30.309014 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-04 01:54:30.309024 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-04 01:54:30.309035 | orchestrator | 2025-04-04 01:54:30.309045 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-04-04 01:54:30.309055 | orchestrator | Friday 04 April 2025 01:53:22 +0000 (0:00:05.504) 0:01:31.682 ********** 2025-04-04 01:54:30.309066 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 
'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-04 01:54:30.309079 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-04 01:54:30.309089 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-04 01:54:30.309106 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-04 01:54:30.309116 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}}) 2025-04-04 01:54:30.309127 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-04 01:54:30.309142 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-04 01:54:30.309152 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-04 01:54:30.309167 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-04 01:54:30.309177 | orchestrator | 2025-04-04 01:54:30.309187 | orchestrator | TASK [ovn-db : Flush handlers] 
************************************************* 2025-04-04 01:54:30.309198 | orchestrator | Friday 04 April 2025 01:53:25 +0000 (0:00:03.498) 0:01:35.180 ********** 2025-04-04 01:54:30.309208 | orchestrator | 2025-04-04 01:54:30.309218 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-04-04 01:54:30.309228 | orchestrator | Friday 04 April 2025 01:53:25 +0000 (0:00:00.090) 0:01:35.271 ********** 2025-04-04 01:54:30.309238 | orchestrator | 2025-04-04 01:54:30.309248 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-04-04 01:54:30.309258 | orchestrator | Friday 04 April 2025 01:53:25 +0000 (0:00:00.081) 0:01:35.352 ********** 2025-04-04 01:54:30.309269 | orchestrator | 2025-04-04 01:54:30.309279 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2025-04-04 01:54:30.309327 | orchestrator | Friday 04 April 2025 01:53:26 +0000 (0:00:00.310) 0:01:35.663 ********** 2025-04-04 01:54:30.309337 | orchestrator | changed: [testbed-node-1] 2025-04-04 01:54:30.309348 | orchestrator | changed: [testbed-node-2] 2025-04-04 01:54:30.309358 | orchestrator | changed: [testbed-node-0] 2025-04-04 01:54:30.309368 | orchestrator | 2025-04-04 01:54:30.309378 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2025-04-04 01:54:30.309392 | orchestrator | Friday 04 April 2025 01:53:32 +0000 (0:00:06.637) 0:01:42.300 ********** 2025-04-04 01:54:30.309403 | orchestrator | changed: [testbed-node-0] 2025-04-04 01:54:30.309413 | orchestrator | changed: [testbed-node-1] 2025-04-04 01:54:30.309423 | orchestrator | changed: [testbed-node-2] 2025-04-04 01:54:30.309433 | orchestrator | 2025-04-04 01:54:30.309443 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2025-04-04 01:54:30.309453 | orchestrator | Friday 04 April 2025 01:53:36 +0000 (0:00:03.129) 
0:01:45.430 ********** 2025-04-04 01:54:30.309463 | orchestrator | changed: [testbed-node-1] 2025-04-04 01:54:30.309474 | orchestrator | changed: [testbed-node-2] 2025-04-04 01:54:30.309484 | orchestrator | changed: [testbed-node-0] 2025-04-04 01:54:30.309494 | orchestrator | 2025-04-04 01:54:30.309504 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2025-04-04 01:54:30.309514 | orchestrator | Friday 04 April 2025 01:53:42 +0000 (0:00:06.786) 0:01:52.216 ********** 2025-04-04 01:54:30.309524 | orchestrator | skipping: [testbed-node-0] 2025-04-04 01:54:30.309534 | orchestrator | 2025-04-04 01:54:30.309545 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2025-04-04 01:54:30.309560 | orchestrator | Friday 04 April 2025 01:53:42 +0000 (0:00:00.152) 0:01:52.369 ********** 2025-04-04 01:54:30.309570 | orchestrator | ok: [testbed-node-1] 2025-04-04 01:54:30.309580 | orchestrator | ok: [testbed-node-2] 2025-04-04 01:54:30.309590 | orchestrator | ok: [testbed-node-0] 2025-04-04 01:54:30.309601 | orchestrator | 2025-04-04 01:54:30.309616 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2025-04-04 01:54:30.309627 | orchestrator | Friday 04 April 2025 01:53:44 +0000 (0:00:01.281) 0:01:53.650 ********** 2025-04-04 01:54:30.309637 | orchestrator | skipping: [testbed-node-1] 2025-04-04 01:54:30.309647 | orchestrator | skipping: [testbed-node-2] 2025-04-04 01:54:30.309657 | orchestrator | changed: [testbed-node-0] 2025-04-04 01:54:30.309667 | orchestrator | 2025-04-04 01:54:30.309677 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2025-04-04 01:54:30.309687 | orchestrator | Friday 04 April 2025 01:53:44 +0000 (0:00:00.611) 0:01:54.261 ********** 2025-04-04 01:54:30.309697 | orchestrator | ok: [testbed-node-0] 2025-04-04 01:54:30.309707 | orchestrator | ok: [testbed-node-1] 2025-04-04 
01:54:30.309718 | orchestrator | ok: [testbed-node-2] 2025-04-04 01:54:30.309728 | orchestrator | 2025-04-04 01:54:30.309738 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2025-04-04 01:54:30.309748 | orchestrator | Friday 04 April 2025 01:53:45 +0000 (0:00:01.049) 0:01:55.311 ********** 2025-04-04 01:54:30.309758 | orchestrator | skipping: [testbed-node-1] 2025-04-04 01:54:30.309768 | orchestrator | changed: [testbed-node-0] 2025-04-04 01:54:30.309778 | orchestrator | skipping: [testbed-node-2] 2025-04-04 01:54:30.309789 | orchestrator | 2025-04-04 01:54:30.309799 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2025-04-04 01:54:30.309809 | orchestrator | Friday 04 April 2025 01:53:46 +0000 (0:00:00.629) 0:01:55.941 ********** 2025-04-04 01:54:30.309819 | orchestrator | ok: [testbed-node-0] 2025-04-04 01:54:30.309829 | orchestrator | ok: [testbed-node-1] 2025-04-04 01:54:30.309840 | orchestrator | ok: [testbed-node-2] 2025-04-04 01:54:30.309848 | orchestrator | 2025-04-04 01:54:30.309857 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2025-04-04 01:54:30.309865 | orchestrator | Friday 04 April 2025 01:53:47 +0000 (0:00:01.204) 0:01:57.146 ********** 2025-04-04 01:54:30.309874 | orchestrator | ok: [testbed-node-0] 2025-04-04 01:54:30.309882 | orchestrator | ok: [testbed-node-1] 2025-04-04 01:54:30.309891 | orchestrator | ok: [testbed-node-2] 2025-04-04 01:54:30.309899 | orchestrator | 2025-04-04 01:54:30.309908 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] ************************************** 2025-04-04 01:54:30.309916 | orchestrator | Friday 04 April 2025 01:53:48 +0000 (0:00:00.795) 0:01:57.942 ********** 2025-04-04 01:54:30.309925 | orchestrator | ok: [testbed-node-0] 2025-04-04 01:54:30.309933 | orchestrator | ok: [testbed-node-1] 2025-04-04 01:54:30.309942 | orchestrator | ok: [testbed-node-2] 
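The sequence above bootstraps the OVN NB/SB Raft clusters, restarts the DB containers, then waits for leader election before configuring connection settings on the leader only (note `changed: [testbed-node-0]` vs. `skipping` on the followers). Under the hood, leader detection typically inspects `cluster/status` output from the ovsdb server. As an illustrative sketch only (not the actual role implementation; the sample text below is modeled on typical `ovn-appctl ... cluster/status OVN_Northbound` output and the field names are assumptions), a minimal parser deciding whether a node is the leader might look like:

```python
import re

def parse_cluster_status(output: str) -> dict:
    """Pull Role/Term/Leader/Status fields out of a cluster/status dump."""
    info = {}
    for line in output.splitlines():
        m = re.match(r"^(Role|Term|Leader|Status):\s*(.+)$", line.strip())
        if m:
            info[m.group(1).lower()] = m.group(2)
    # A Raft member reports Role: leader / follower / candidate.
    info["is_leader"] = info.get("role") == "leader"
    return info

# Hypothetical sample output for demonstration purposes.
sample = """\
1a2b
Name: OVN_Northbound
Cluster ID: abcd (uuid)
Server ID: 1a2b (uuid)
Address: tcp:192.0.2.10:6643
Status: cluster member
Role: leader
Term: 4
Leader: self
"""

status = parse_cluster_status(sample)
print(status["role"], status["is_leader"])  # leader True
```

A check like this is why the "Configure OVN NB/SB connection settings" tasks run as `changed` on exactly one node per cluster: followers short-circuit to `skipping` once their parsed role is not `leader`.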
2025-04-04 01:54:30.309950 | orchestrator | 2025-04-04 01:54:30.309959 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2025-04-04 01:54:30.309968 | orchestrator | Friday 04 April 2025 01:53:49 +0000 (0:00:00.574) 0:01:58.516 ********** 2025-04-04 01:54:30.309976 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-04 01:54:30.309985 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-04 01:54:30.309994 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-04 01:54:30.310007 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}}) 2025-04-04 01:54:30.310039 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-04 01:54:30.310050 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-04 01:54:30.310064 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-04 01:54:30.310073 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-04 01:54:30.310082 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-04 01:54:30.310091 | orchestrator | 2025-04-04 01:54:30.310100 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2025-04-04 01:54:30.310109 | orchestrator | Friday 04 April 2025 01:53:51 +0000 (0:00:02.086) 0:02:00.603 ********** 2025-04-04 01:54:30.310117 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-04 01:54:30.310126 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-04 01:54:30.310140 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-04 01:54:30.310152 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-04 01:54:30.310161 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-04 01:54:30.310170 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-04 01:54:30.310183 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-04 01:54:30.310192 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-04 01:54:30.310201 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-04 01:54:30.310210 | orchestrator | 2025-04-04 01:54:30.310219 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-04-04 01:54:30.310227 | orchestrator | Friday 04 April 2025 01:53:57 +0000 (0:00:05.950) 0:02:06.553 ********** 2025-04-04 01:54:30.310236 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-04 01:54:30.310244 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-04 01:54:30.310257 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-04 01:54:30.310266 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-04 01:54:30.310278 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-04 01:54:30.310302 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-04 01:54:30.310314 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-04 01:54:30.310328 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': 
{'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-04 01:54:30.310337 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-04 01:54:30.310346 | orchestrator | 2025-04-04 01:54:30.310355 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-04-04 01:54:30.310363 | orchestrator | Friday 04 April 2025 01:54:00 +0000 (0:00:03.399) 0:02:09.953 ********** 2025-04-04 01:54:30.310372 | orchestrator | 2025-04-04 01:54:30.310380 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-04-04 01:54:30.310389 | orchestrator | Friday 04 April 2025 01:54:00 +0000 (0:00:00.288) 0:02:10.241 ********** 2025-04-04 01:54:30.310398 | orchestrator | 2025-04-04 01:54:30.310406 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-04-04 01:54:30.310415 | orchestrator | Friday 04 April 2025 01:54:00 +0000 (0:00:00.081) 0:02:10.322 ********** 2025-04-04 01:54:30.310423 | orchestrator | 2025-04-04 01:54:30.310432 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2025-04-04 01:54:30.310440 | orchestrator | Friday 04 April 2025 01:54:01 +0000 (0:00:00.064) 0:02:10.387 ********** 2025-04-04 01:54:30.310456 | orchestrator | changed: 
[testbed-node-1] 2025-04-04 01:54:30.310464 | orchestrator | changed: [testbed-node-2] 2025-04-04 01:54:30.310473 | orchestrator | 2025-04-04 01:54:30.310481 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2025-04-04 01:54:30.310490 | orchestrator | Friday 04 April 2025 01:54:08 +0000 (0:00:07.125) 0:02:17.513 ********** 2025-04-04 01:54:30.310499 | orchestrator | changed: [testbed-node-1] 2025-04-04 01:54:30.310507 | orchestrator | changed: [testbed-node-2] 2025-04-04 01:54:30.310516 | orchestrator | 2025-04-04 01:54:30.310524 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2025-04-04 01:54:30.310533 | orchestrator | Friday 04 April 2025 01:54:14 +0000 (0:00:06.624) 0:02:24.138 ********** 2025-04-04 01:54:30.310541 | orchestrator | changed: [testbed-node-1] 2025-04-04 01:54:30.310550 | orchestrator | changed: [testbed-node-2] 2025-04-04 01:54:30.310558 | orchestrator | 2025-04-04 01:54:30.310567 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2025-04-04 01:54:30.310575 | orchestrator | Friday 04 April 2025 01:54:21 +0000 (0:00:07.026) 0:02:31.164 ********** 2025-04-04 01:54:30.310584 | orchestrator | skipping: [testbed-node-0] 2025-04-04 01:54:30.310592 | orchestrator | 2025-04-04 01:54:30.310601 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2025-04-04 01:54:30.310609 | orchestrator | Friday 04 April 2025 01:54:22 +0000 (0:00:00.389) 0:02:31.554 ********** 2025-04-04 01:54:30.310618 | orchestrator | ok: [testbed-node-0] 2025-04-04 01:54:30.310626 | orchestrator | ok: [testbed-node-1] 2025-04-04 01:54:30.310635 | orchestrator | ok: [testbed-node-2] 2025-04-04 01:54:30.310644 | orchestrator | 2025-04-04 01:54:30.310652 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2025-04-04 01:54:30.310661 | orchestrator | Friday 04 
April 2025 01:54:22 +0000 (0:00:00.814) 0:02:32.368 ********** 2025-04-04 01:54:30.310669 | orchestrator | changed: [testbed-node-0] 2025-04-04 01:54:30.310678 | orchestrator | skipping: [testbed-node-1] 2025-04-04 01:54:30.310686 | orchestrator | skipping: [testbed-node-2] 2025-04-04 01:54:30.310695 | orchestrator | 2025-04-04 01:54:30.310704 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2025-04-04 01:54:30.310712 | orchestrator | Friday 04 April 2025 01:54:23 +0000 (0:00:00.652) 0:02:33.020 ********** 2025-04-04 01:54:30.310721 | orchestrator | ok: [testbed-node-0] 2025-04-04 01:54:30.310736 | orchestrator | ok: [testbed-node-1] 2025-04-04 01:54:30.310746 | orchestrator | ok: [testbed-node-2] 2025-04-04 01:54:30.310754 | orchestrator | 2025-04-04 01:54:30.310763 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2025-04-04 01:54:30.310771 | orchestrator | Friday 04 April 2025 01:54:24 +0000 (0:00:01.094) 0:02:34.115 ********** 2025-04-04 01:54:30.310780 | orchestrator | skipping: [testbed-node-1] 2025-04-04 01:54:30.310789 | orchestrator | skipping: [testbed-node-2] 2025-04-04 01:54:30.310797 | orchestrator | changed: [testbed-node-0] 2025-04-04 01:54:30.310806 | orchestrator | 2025-04-04 01:54:30.310814 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2025-04-04 01:54:30.310823 | orchestrator | Friday 04 April 2025 01:54:25 +0000 (0:00:00.967) 0:02:35.083 ********** 2025-04-04 01:54:30.310831 | orchestrator | ok: [testbed-node-0] 2025-04-04 01:54:30.310840 | orchestrator | ok: [testbed-node-1] 2025-04-04 01:54:30.310848 | orchestrator | ok: [testbed-node-2] 2025-04-04 01:54:30.310857 | orchestrator | 2025-04-04 01:54:30.310865 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2025-04-04 01:54:30.310874 | orchestrator | Friday 04 April 2025 01:54:26 +0000 (0:00:01.060) 
0:02:36.144 ********** 2025-04-04 01:54:30.310882 | orchestrator | ok: [testbed-node-0] 2025-04-04 01:54:30.310891 | orchestrator | ok: [testbed-node-1] 2025-04-04 01:54:30.310899 | orchestrator | ok: [testbed-node-2] 2025-04-04 01:54:30.310908 | orchestrator | 2025-04-04 01:54:30.310916 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-04 01:54:30.310925 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2025-04-04 01:54:30.310939 | orchestrator | testbed-node-1 : ok=43  changed=18  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2025-04-04 01:54:30.310951 | orchestrator | testbed-node-2 : ok=43  changed=18  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2025-04-04 01:54:33.355008 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-04 01:54:33.355142 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-04 01:54:33.355159 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-04 01:54:33.355172 | orchestrator | 2025-04-04 01:54:33.355185 | orchestrator | 2025-04-04 01:54:33.355199 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-04 01:54:33.355213 | orchestrator | Friday 04 April 2025 01:54:28 +0000 (0:00:02.229) 0:02:38.373 ********** 2025-04-04 01:54:33.355225 | orchestrator | =============================================================================== 2025-04-04 01:54:33.355238 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 21.07s 2025-04-04 01:54:33.355251 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 18.49s 2025-04-04 01:54:33.355263 | orchestrator | ovn-db : Restart ovn-northd container 
---------------------------------- 13.81s 2025-04-04 01:54:33.355275 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 13.76s 2025-04-04 01:54:33.355317 | orchestrator | ovn-db : Restart ovn-sb-db container ------------------------------------ 9.75s 2025-04-04 01:54:33.355330 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 5.95s 2025-04-04 01:54:33.355343 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 5.50s 2025-04-04 01:54:33.355355 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 3.50s 2025-04-04 01:54:33.355375 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 3.40s 2025-04-04 01:54:33.355388 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 3.05s 2025-04-04 01:54:33.355401 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 2.95s 2025-04-04 01:54:33.355413 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 2.77s 2025-04-04 01:54:33.355425 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 2.46s 2025-04-04 01:54:33.355438 | orchestrator | ovn-db : Wait for ovn-sb-db --------------------------------------------- 2.23s 2025-04-04 01:54:33.355450 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 2.19s 2025-04-04 01:54:33.355463 | orchestrator | ovn-db : Checking for any existing OVN DB container volumes ------------- 2.17s 2025-04-04 01:54:33.355475 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 2.09s 2025-04-04 01:54:33.355488 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 2.05s 2025-04-04 01:54:33.355500 | orchestrator | ovn-controller : include_tasks 
------------------------------------------ 1.73s 2025-04-04 01:54:33.355512 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 1.65s 2025-04-04 01:54:33.355526 | orchestrator | 2025-04-04 01:54:30 | INFO  | Task 96516d64-daf5-498e-9e4b-0a75c66f9fef is in state SUCCESS 2025-04-04 01:54:33.355539 | orchestrator | 2025-04-04 01:54:30 | INFO  | Task 8a72e806-06b2-4248-8108-4d22850067f9 is in state STARTED 2025-04-04 01:54:33.355552 | orchestrator | 2025-04-04 01:54:30 | INFO  | Task 454bc2f6-9d45-451f-8ce1-d6a4331b55a5 is in state STARTED 2025-04-04 01:54:33.355564 | orchestrator | 2025-04-04 01:54:30 | INFO  | Wait 1 second(s) until the next check 2025-04-04 01:54:33.355619 | orchestrator | 2025-04-04 01:54:33 | INFO  | Task fd1c07e6-f0db-4865-a8a3-41f95612fdf3 is in state STARTED 2025-04-04 01:54:33.355871 | orchestrator | 2025-04-04 01:54:33 | INFO  | Task 8a72e806-06b2-4248-8108-4d22850067f9 is in state STARTED 2025-04-04 01:54:33.355997 | orchestrator | 2025-04-04 01:54:33 | INFO  | Task 454bc2f6-9d45-451f-8ce1-d6a4331b55a5 is in state STARTED 2025-04-04 01:54:33.356113 | orchestrator | 2025-04-04 01:54:33 | INFO  | Wait 1 second(s) until the next check 2025-04-04 01:54:36.400052 | orchestrator | 2025-04-04 01:54:36 | INFO  | Task fd1c07e6-f0db-4865-a8a3-41f95612fdf3 is in state STARTED 2025-04-04 01:54:36.401204 | orchestrator | 2025-04-04 01:54:36 | INFO  | Task 8a72e806-06b2-4248-8108-4d22850067f9 is in state STARTED 2025-04-04 01:54:36.404614 | orchestrator | 2025-04-04 01:54:36 | INFO  | Task 454bc2f6-9d45-451f-8ce1-d6a4331b55a5 is in state STARTED 2025-04-04 01:54:39.469631 | orchestrator | 2025-04-04 01:54:36 | INFO  | Wait 1 second(s) until the next check 2025-04-04 01:54:39.469782 | orchestrator | 2025-04-04 01:54:39 | INFO  | Task fd1c07e6-f0db-4865-a8a3-41f95612fdf3 is in state STARTED 2025-04-04 01:54:39.470914 | orchestrator | 2025-04-04 01:54:39 | INFO  | Task 8a72e806-06b2-4248-8108-4d22850067f9 is in state 
STARTED 2025-04-04 01:54:39.472008 | orchestrator | 2025-04-04 01:54:39 | INFO  | Task 454bc2f6-9d45-451f-8ce1-d6a4331b55a5 is in state STARTED 2025-04-04 01:54:39.472410 | orchestrator | 2025-04-04 01:54:39 | INFO  | Wait 1 second(s) until the next check [... identical polling output repeated every ~3 s from 01:54:42 to 01:55:40: tasks fd1c07e6-f0db-4865-a8a3-41f95612fdf3, 8a72e806-06b2-4248-8108-4d22850067f9 and 454bc2f6-9d45-451f-8ce1-d6a4331b55a5 remain in state STARTED ...] 2025-04-04 01:55:43.866959 | orchestrator | 2025-04-04 01:55:43 | INFO  | Task 260b2be6-bbdd-472f-b200-645759bff92a is in state STARTED [... polling repeats ...] 2025-04-04 01:55:59.229868 | orchestrator | 2025-04-04 01:55:59 | INFO  | Task 260b2be6-bbdd-472f-b200-645759bff92a is in state SUCCESS [... polling of the three remaining STARTED tasks repeats every ~3 s ...] 2025-04-04 01:56:32.912742 | orchestrator | 2025-04-04 01:56:32 | INFO  | Task fd1c07e6-f0db-4865-a8a3-41f95612fdf3 is in state 
STARTED 2025-04-04 01:56:32.914740 | orchestrator | 2025-04-04 01:56:32 | INFO  | Task 8a72e806-06b2-4248-8108-4d22850067f9 is in state STARTED 2025-04-04 01:56:32.919504 | orchestrator | 2025-04-04 01:56:32 | INFO  | Task 454bc2f6-9d45-451f-8ce1-d6a4331b55a5 is in state STARTED 2025-04-04 01:56:35.972187 | orchestrator | 2025-04-04 01:56:32 | INFO  | Wait 1 second(s) until the next check 2025-04-04 01:56:35.972381 | orchestrator | 2025-04-04 01:56:35 | INFO  | Task fd1c07e6-f0db-4865-a8a3-41f95612fdf3 is in state STARTED 2025-04-04 01:56:35.973202 | orchestrator | 2025-04-04 01:56:35 | INFO  | Task 8a72e806-06b2-4248-8108-4d22850067f9 is in state STARTED 2025-04-04 01:56:35.974883 | orchestrator | 2025-04-04 01:56:35 | INFO  | Task 454bc2f6-9d45-451f-8ce1-d6a4331b55a5 is in state STARTED 2025-04-04 01:56:39.033344 | orchestrator | 2025-04-04 01:56:35 | INFO  | Wait 1 second(s) until the next check 2025-04-04 01:56:39.033491 | orchestrator | 2025-04-04 01:56:39 | INFO  | Task fd1c07e6-f0db-4865-a8a3-41f95612fdf3 is in state STARTED 2025-04-04 01:56:39.036386 | orchestrator | 2025-04-04 01:56:39 | INFO  | Task 8a72e806-06b2-4248-8108-4d22850067f9 is in state STARTED 2025-04-04 01:56:39.037845 | orchestrator | 2025-04-04 01:56:39 | INFO  | Task 454bc2f6-9d45-451f-8ce1-d6a4331b55a5 is in state STARTED 2025-04-04 01:56:42.112226 | orchestrator | 2025-04-04 01:56:39 | INFO  | Wait 1 second(s) until the next check 2025-04-04 01:56:42.112434 | orchestrator | 2025-04-04 01:56:42 | INFO  | Task fd1c07e6-f0db-4865-a8a3-41f95612fdf3 is in state STARTED 2025-04-04 01:56:42.117764 | orchestrator | 2025-04-04 01:56:42 | INFO  | Task 8a72e806-06b2-4248-8108-4d22850067f9 is in state STARTED 2025-04-04 01:56:45.178411 | orchestrator | 2025-04-04 01:56:42 | INFO  | Task 454bc2f6-9d45-451f-8ce1-d6a4331b55a5 is in state STARTED 2025-04-04 01:56:45.178522 | orchestrator | 2025-04-04 01:56:42 | INFO  | Wait 1 second(s) until the next check 2025-04-04 01:56:45.178551 | orchestrator | 
2025-04-04 01:56:45 | INFO  | Task fd1c07e6-f0db-4865-a8a3-41f95612fdf3 is in state STARTED 2025-04-04 01:56:45.180760 | orchestrator | 2025-04-04 01:56:45 | INFO  | Task 8a72e806-06b2-4248-8108-4d22850067f9 is in state STARTED 2025-04-04 01:56:45.182904 | orchestrator | 2025-04-04 01:56:45 | INFO  | Task 454bc2f6-9d45-451f-8ce1-d6a4331b55a5 is in state STARTED 2025-04-04 01:56:45.182979 | orchestrator | 2025-04-04 01:56:45 | INFO  | Wait 1 second(s) until the next check 2025-04-04 01:56:48.243523 | orchestrator | 2025-04-04 01:56:48 | INFO  | Task fd1c07e6-f0db-4865-a8a3-41f95612fdf3 is in state STARTED 2025-04-04 01:56:48.245821 | orchestrator | 2025-04-04 01:56:48 | INFO  | Task 8a72e806-06b2-4248-8108-4d22850067f9 is in state STARTED 2025-04-04 01:56:48.247366 | orchestrator | 2025-04-04 01:56:48 | INFO  | Task 454bc2f6-9d45-451f-8ce1-d6a4331b55a5 is in state STARTED 2025-04-04 01:56:51.332742 | orchestrator | 2025-04-04 01:56:48 | INFO  | Wait 1 second(s) until the next check 2025-04-04 01:56:51.332898 | orchestrator | 2025-04-04 01:56:51 | INFO  | Task fd1c07e6-f0db-4865-a8a3-41f95612fdf3 is in state STARTED 2025-04-04 01:56:51.333231 | orchestrator | 2025-04-04 01:56:51 | INFO  | Task 8a72e806-06b2-4248-8108-4d22850067f9 is in state STARTED 2025-04-04 01:56:51.335758 | orchestrator | 2025-04-04 01:56:51 | INFO  | Task 454bc2f6-9d45-451f-8ce1-d6a4331b55a5 is in state STARTED 2025-04-04 01:56:54.419057 | orchestrator | 2025-04-04 01:56:51 | INFO  | Wait 1 second(s) until the next check 2025-04-04 01:56:54.419204 | orchestrator | 2025-04-04 01:56:54 | INFO  | Task fd1c07e6-f0db-4865-a8a3-41f95612fdf3 is in state STARTED 2025-04-04 01:56:54.420809 | orchestrator | 2025-04-04 01:56:54 | INFO  | Task 8a72e806-06b2-4248-8108-4d22850067f9 is in state STARTED 2025-04-04 01:56:54.423310 | orchestrator | 2025-04-04 01:56:54 | INFO  | Task 454bc2f6-9d45-451f-8ce1-d6a4331b55a5 is in state STARTED 2025-04-04 01:56:57.483072 | orchestrator | 2025-04-04 01:56:54 | INFO  | 
Wait 1 second(s) until the next check 2025-04-04 01:56:57.483223 | orchestrator | 2025-04-04 01:56:57 | INFO  | Task fd1c07e6-f0db-4865-a8a3-41f95612fdf3 is in state STARTED 2025-04-04 01:56:57.483821 | orchestrator | 2025-04-04 01:56:57 | INFO  | Task 8a72e806-06b2-4248-8108-4d22850067f9 is in state STARTED 2025-04-04 01:56:57.483861 | orchestrator | 2025-04-04 01:56:57 | INFO  | Task 454bc2f6-9d45-451f-8ce1-d6a4331b55a5 is in state STARTED 2025-04-04 01:57:00.541921 | orchestrator | 2025-04-04 01:56:57 | INFO  | Wait 1 second(s) until the next check 2025-04-04 01:57:00.542146 | orchestrator | 2025-04-04 01:57:00 | INFO  | Task fd1c07e6-f0db-4865-a8a3-41f95612fdf3 is in state STARTED 2025-04-04 01:57:00.543819 | orchestrator | 2025-04-04 01:57:00 | INFO  | Task 8a72e806-06b2-4248-8108-4d22850067f9 is in state STARTED 2025-04-04 01:57:00.547196 | orchestrator | 2025-04-04 01:57:00 | INFO  | Task 454bc2f6-9d45-451f-8ce1-d6a4331b55a5 is in state STARTED 2025-04-04 01:57:03.597624 | orchestrator | 2025-04-04 01:57:00 | INFO  | Wait 1 second(s) until the next check 2025-04-04 01:57:03.597782 | orchestrator | 2025-04-04 01:57:03 | INFO  | Task fd1c07e6-f0db-4865-a8a3-41f95612fdf3 is in state STARTED 2025-04-04 01:57:03.599190 | orchestrator | 2025-04-04 01:57:03 | INFO  | Task 8a72e806-06b2-4248-8108-4d22850067f9 is in state STARTED 2025-04-04 01:57:03.601625 | orchestrator | 2025-04-04 01:57:03 | INFO  | Task 454bc2f6-9d45-451f-8ce1-d6a4331b55a5 is in state STARTED 2025-04-04 01:57:06.646395 | orchestrator | 2025-04-04 01:57:03 | INFO  | Wait 1 second(s) until the next check 2025-04-04 01:57:06.646554 | orchestrator | 2025-04-04 01:57:06 | INFO  | Task fd1c07e6-f0db-4865-a8a3-41f95612fdf3 is in state STARTED 2025-04-04 01:57:06.647322 | orchestrator | 2025-04-04 01:57:06 | INFO  | Task 8a72e806-06b2-4248-8108-4d22850067f9 is in state STARTED 2025-04-04 01:57:06.650524 | orchestrator | 2025-04-04 01:57:06 | INFO  | Task 454bc2f6-9d45-451f-8ce1-d6a4331b55a5 is in state 
STARTED 2025-04-04 01:57:09.695709 | orchestrator | 2025-04-04 01:57:06 | INFO  | Wait 1 second(s) until the next check 2025-04-04 01:57:09.695861 | orchestrator | 2025-04-04 01:57:09 | INFO  | Task fd1c07e6-f0db-4865-a8a3-41f95612fdf3 is in state STARTED 2025-04-04 01:57:12.764144 | orchestrator | 2025-04-04 01:57:09 | INFO  | Task 8a72e806-06b2-4248-8108-4d22850067f9 is in state STARTED 2025-04-04 01:57:12.764272 | orchestrator | 2025-04-04 01:57:09 | INFO  | Task 454bc2f6-9d45-451f-8ce1-d6a4331b55a5 is in state STARTED 2025-04-04 01:57:12.764339 | orchestrator | 2025-04-04 01:57:09 | INFO  | Wait 1 second(s) until the next check 2025-04-04 01:57:12.764374 | orchestrator | 2025-04-04 01:57:12 | INFO  | Task fd1c07e6-f0db-4865-a8a3-41f95612fdf3 is in state STARTED 2025-04-04 01:57:12.764920 | orchestrator | 2025-04-04 01:57:12 | INFO  | Task 8a72e806-06b2-4248-8108-4d22850067f9 is in state STARTED 2025-04-04 01:57:12.767193 | orchestrator | 2025-04-04 01:57:12 | INFO  | Task 454bc2f6-9d45-451f-8ce1-d6a4331b55a5 is in state STARTED 2025-04-04 01:57:15.816436 | orchestrator | 2025-04-04 01:57:12 | INFO  | Wait 1 second(s) until the next check 2025-04-04 01:57:15.816569 | orchestrator | 2025-04-04 01:57:15 | INFO  | Task fd1c07e6-f0db-4865-a8a3-41f95612fdf3 is in state STARTED 2025-04-04 01:57:15.818544 | orchestrator | 2025-04-04 01:57:15 | INFO  | Task 8a72e806-06b2-4248-8108-4d22850067f9 is in state STARTED 2025-04-04 01:57:15.821025 | orchestrator | 2025-04-04 01:57:15 | INFO  | Task 454bc2f6-9d45-451f-8ce1-d6a4331b55a5 is in state STARTED 2025-04-04 01:57:18.874904 | orchestrator | 2025-04-04 01:57:15 | INFO  | Wait 1 second(s) until the next check 2025-04-04 01:57:18.875123 | orchestrator | 2025-04-04 01:57:18 | INFO  | Task fd1c07e6-f0db-4865-a8a3-41f95612fdf3 is in state STARTED 2025-04-04 01:57:18.875230 | orchestrator | 2025-04-04 01:57:18 | INFO  | Task 8a72e806-06b2-4248-8108-4d22850067f9 is in state STARTED 2025-04-04 01:57:18.881489 | orchestrator | 
2025-04-04 01:57:18 | INFO  | Task 454bc2f6-9d45-451f-8ce1-d6a4331b55a5 is in state STARTED 2025-04-04 01:57:21.936059 | orchestrator | 2025-04-04 01:57:18 | INFO  | Wait 1 second(s) until the next check 2025-04-04 01:57:21.936241 | orchestrator | 2025-04-04 01:57:21 | INFO  | Task fd1c07e6-f0db-4865-a8a3-41f95612fdf3 is in state STARTED 2025-04-04 01:57:21.937534 | orchestrator | 2025-04-04 01:57:21 | INFO  | Task 8a72e806-06b2-4248-8108-4d22850067f9 is in state STARTED 2025-04-04 01:57:21.940943 | orchestrator | 2025-04-04 01:57:21 | INFO  | Task 454bc2f6-9d45-451f-8ce1-d6a4331b55a5 is in state STARTED 2025-04-04 01:57:24.996136 | orchestrator | 2025-04-04 01:57:21 | INFO  | Wait 1 second(s) until the next check 2025-04-04 01:57:24.996338 | orchestrator | 2025-04-04 01:57:24 | INFO  | Task fd1c07e6-f0db-4865-a8a3-41f95612fdf3 is in state STARTED 2025-04-04 01:57:24.999828 | orchestrator | 2025-04-04 01:57:24 | INFO  | Task 8a72e806-06b2-4248-8108-4d22850067f9 is in state STARTED 2025-04-04 01:57:25.001964 | orchestrator | 2025-04-04 01:57:25 | INFO  | Task 454bc2f6-9d45-451f-8ce1-d6a4331b55a5 is in state STARTED 2025-04-04 01:57:28.054879 | orchestrator | 2025-04-04 01:57:25 | INFO  | Wait 1 second(s) until the next check 2025-04-04 01:57:28.055012 | orchestrator | 2025-04-04 01:57:28 | INFO  | Task fd1c07e6-f0db-4865-a8a3-41f95612fdf3 is in state STARTED 2025-04-04 01:57:28.057851 | orchestrator | 2025-04-04 01:57:28 | INFO  | Task 8a72e806-06b2-4248-8108-4d22850067f9 is in state STARTED 2025-04-04 01:57:28.059599 | orchestrator | 2025-04-04 01:57:28 | INFO  | Task 454bc2f6-9d45-451f-8ce1-d6a4331b55a5 is in state STARTED 2025-04-04 01:57:28.060123 | orchestrator | 2025-04-04 01:57:28 | INFO  | Wait 1 second(s) until the next check 2025-04-04 01:57:31.116381 | orchestrator | 2025-04-04 01:57:31 | INFO  | Task fd1c07e6-f0db-4865-a8a3-41f95612fdf3 is in state STARTED 2025-04-04 01:57:31.116933 | orchestrator | 2025-04-04 01:57:31 | INFO  | Task 
8a72e806-06b2-4248-8108-4d22850067f9 is in state STARTED 2025-04-04 01:57:31.118518 | orchestrator | 2025-04-04 01:57:31 | INFO  | Task 454bc2f6-9d45-451f-8ce1-d6a4331b55a5 is in state STARTED 2025-04-04 01:57:34.168098 | orchestrator | 2025-04-04 01:57:31 | INFO  | Wait 1 second(s) until the next check 2025-04-04 01:57:34.168252 | orchestrator | 2025-04-04 01:57:34 | INFO  | Task fd1c07e6-f0db-4865-a8a3-41f95612fdf3 is in state STARTED 2025-04-04 01:57:34.169638 | orchestrator | 2025-04-04 01:57:34 | INFO  | Task 8a72e806-06b2-4248-8108-4d22850067f9 is in state STARTED 2025-04-04 01:57:34.171511 | orchestrator | 2025-04-04 01:57:34 | INFO  | Task 454bc2f6-9d45-451f-8ce1-d6a4331b55a5 is in state STARTED 2025-04-04 01:57:34.171575 | orchestrator | 2025-04-04 01:57:34 | INFO  | Wait 1 second(s) until the next check 2025-04-04 01:57:37.230895 | orchestrator | 2025-04-04 01:57:37 | INFO  | Task fd1c07e6-f0db-4865-a8a3-41f95612fdf3 is in state STARTED 2025-04-04 01:57:37.231558 | orchestrator | 2025-04-04 01:57:37 | INFO  | Task 8a72e806-06b2-4248-8108-4d22850067f9 is in state STARTED 2025-04-04 01:57:37.232375 | orchestrator | 2025-04-04 01:57:37 | INFO  | Task 454bc2f6-9d45-451f-8ce1-d6a4331b55a5 is in state STARTED 2025-04-04 01:57:37.232570 | orchestrator | 2025-04-04 01:57:37 | INFO  | Wait 1 second(s) until the next check 2025-04-04 01:57:40.351248 | orchestrator | 2025-04-04 01:57:40 | INFO  | Task fd1c07e6-f0db-4865-a8a3-41f95612fdf3 is in state STARTED 2025-04-04 01:57:40.352372 | orchestrator | 2025-04-04 01:57:40 | INFO  | Task 8a72e806-06b2-4248-8108-4d22850067f9 is in state STARTED 2025-04-04 01:57:40.352410 | orchestrator | 2025-04-04 01:57:40 | INFO  | Task 454bc2f6-9d45-451f-8ce1-d6a4331b55a5 is in state STARTED 2025-04-04 01:57:40.354233 | orchestrator | 2025-04-04 01:57:40 | INFO  | Wait 1 second(s) until the next check 2025-04-04 01:57:43.418242 | orchestrator | 2025-04-04 01:57:43 | INFO  | Task fd1c07e6-f0db-4865-a8a3-41f95612fdf3 is in state 
STARTED 2025-04-04 01:57:43.419607 | orchestrator | 2025-04-04 01:57:43 | INFO  | Task 8a72e806-06b2-4248-8108-4d22850067f9 is in state STARTED 2025-04-04 01:57:43.421435 | orchestrator | 2025-04-04 01:57:43 | INFO  | Task 454bc2f6-9d45-451f-8ce1-d6a4331b55a5 is in state STARTED 2025-04-04 01:57:43.423097 | orchestrator | 2025-04-04 01:57:43 | INFO  | Wait 1 second(s) until the next check 2025-04-04 01:57:46.473857 | orchestrator | 2025-04-04 01:57:46 | INFO  | Task fd1c07e6-f0db-4865-a8a3-41f95612fdf3 is in state STARTED 2025-04-04 01:57:46.474243 | orchestrator | 2025-04-04 01:57:46 | INFO  | Task 8a72e806-06b2-4248-8108-4d22850067f9 is in state STARTED 2025-04-04 01:57:46.475534 | orchestrator | 2025-04-04 01:57:46 | INFO  | Task 454bc2f6-9d45-451f-8ce1-d6a4331b55a5 is in state STARTED 2025-04-04 01:57:49.535115 | orchestrator | 2025-04-04 01:57:46 | INFO  | Wait 1 second(s) until the next check 2025-04-04 01:57:49.535323 | orchestrator | 2025-04-04 01:57:49 | INFO  | Task fd1c07e6-f0db-4865-a8a3-41f95612fdf3 is in state STARTED 2025-04-04 01:57:49.535749 | orchestrator | 2025-04-04 01:57:49 | INFO  | Task 8a72e806-06b2-4248-8108-4d22850067f9 is in state STARTED 2025-04-04 01:57:49.537182 | orchestrator | 2025-04-04 01:57:49 | INFO  | Task 454bc2f6-9d45-451f-8ce1-d6a4331b55a5 is in state STARTED 2025-04-04 01:57:52.594169 | orchestrator | 2025-04-04 01:57:49 | INFO  | Wait 1 second(s) until the next check 2025-04-04 01:57:52.594425 | orchestrator | 2025-04-04 01:57:52 | INFO  | Task fd1c07e6-f0db-4865-a8a3-41f95612fdf3 is in state STARTED 2025-04-04 01:57:52.594525 | orchestrator | 2025-04-04 01:57:52 | INFO  | Task 8a72e806-06b2-4248-8108-4d22850067f9 is in state STARTED 2025-04-04 01:57:52.595644 | orchestrator | 2025-04-04 01:57:52 | INFO  | Task 454bc2f6-9d45-451f-8ce1-d6a4331b55a5 is in state STARTED 2025-04-04 01:57:55.641450 | orchestrator | 2025-04-04 01:57:52 | INFO  | Wait 1 second(s) until the next check 2025-04-04 01:57:55.641595 | orchestrator | 
2025-04-04 01:57:55 | INFO  | Task fd1c07e6-f0db-4865-a8a3-41f95612fdf3 is in state STARTED 2025-04-04 01:57:55.643881 | orchestrator | 2025-04-04 01:57:55 | INFO  | Task 8a72e806-06b2-4248-8108-4d22850067f9 is in state STARTED 2025-04-04 01:57:55.646841 | orchestrator | 2025-04-04 01:57:55 | INFO  | Task 454bc2f6-9d45-451f-8ce1-d6a4331b55a5 is in state STARTED 2025-04-04 01:57:55.647324 | orchestrator | 2025-04-04 01:57:55 | INFO  | Wait 1 second(s) until the next check 2025-04-04 01:57:58.705748 | orchestrator | 2025-04-04 01:57:58 | INFO  | Task fd1c07e6-f0db-4865-a8a3-41f95612fdf3 is in state STARTED 2025-04-04 01:57:58.706432 | orchestrator | 2025-04-04 01:57:58 | INFO  | Task 8a72e806-06b2-4248-8108-4d22850067f9 is in state STARTED 2025-04-04 01:57:58.706481 | orchestrator | 2025-04-04 01:57:58 | INFO  | Task 454bc2f6-9d45-451f-8ce1-d6a4331b55a5 is in state STARTED 2025-04-04 01:58:01.763302 | orchestrator | 2025-04-04 01:57:58 | INFO  | Wait 1 second(s) until the next check 2025-04-04 01:58:01.763440 | orchestrator | 2025-04-04 01:58:01 | INFO  | Task fd1c07e6-f0db-4865-a8a3-41f95612fdf3 is in state STARTED 2025-04-04 01:58:01.764165 | orchestrator | 2025-04-04 01:58:01 | INFO  | Task 8a72e806-06b2-4248-8108-4d22850067f9 is in state STARTED 2025-04-04 01:58:01.765886 | orchestrator | 2025-04-04 01:58:01 | INFO  | Task 454bc2f6-9d45-451f-8ce1-d6a4331b55a5 is in state STARTED 2025-04-04 01:58:04.812124 | orchestrator | 2025-04-04 01:58:01 | INFO  | Wait 1 second(s) until the next check 2025-04-04 01:58:04.812274 | orchestrator | 2025-04-04 01:58:04 | INFO  | Task fd1c07e6-f0db-4865-a8a3-41f95612fdf3 is in state STARTED 2025-04-04 01:58:04.812407 | orchestrator | 2025-04-04 01:58:04 | INFO  | Task 8a72e806-06b2-4248-8108-4d22850067f9 is in state STARTED 2025-04-04 01:58:04.815305 | orchestrator | 2025-04-04 01:58:04 | INFO  | Task 454bc2f6-9d45-451f-8ce1-d6a4331b55a5 is in state STARTED 2025-04-04 01:58:07.894656 | orchestrator | 2025-04-04 01:58:04 | INFO  | 
Wait 1 second(s) until the next check 2025-04-04 01:58:07.894834 | orchestrator | 2025-04-04 01:58:07 | INFO  | Task fd1c07e6-f0db-4865-a8a3-41f95612fdf3 is in state STARTED 2025-04-04 01:58:07.897534 | orchestrator | 2025-04-04 01:58:07 | INFO  | Task 8a72e806-06b2-4248-8108-4d22850067f9 is in state STARTED 2025-04-04 01:58:07.897663 | orchestrator | 2025-04-04 01:58:07 | INFO  | Task 454bc2f6-9d45-451f-8ce1-d6a4331b55a5 is in state STARTED 2025-04-04 01:58:10.952554 | orchestrator | 2025-04-04 01:58:07 | INFO  | Wait 1 second(s) until the next check 2025-04-04 01:58:10.952705 | orchestrator | 2025-04-04 01:58:10 | INFO  | Task fd1c07e6-f0db-4865-a8a3-41f95612fdf3 is in state STARTED 2025-04-04 01:58:10.959502 | orchestrator | 2025-04-04 01:58:10 | INFO  | Task 8a72e806-06b2-4248-8108-4d22850067f9 is in state STARTED 2025-04-04 01:58:10.960614 | orchestrator | 2025-04-04 01:58:10 | INFO  | Task 454bc2f6-9d45-451f-8ce1-d6a4331b55a5 is in state STARTED 2025-04-04 01:58:14.028071 | orchestrator | 2025-04-04 01:58:10 | INFO  | Wait 1 second(s) until the next check 2025-04-04 01:58:14.028226 | orchestrator | 2025-04-04 01:58:14 | INFO  | Task fd1c07e6-f0db-4865-a8a3-41f95612fdf3 is in state STARTED 2025-04-04 01:58:14.030478 | orchestrator | 2025-04-04 01:58:14 | INFO  | Task 8a72e806-06b2-4248-8108-4d22850067f9 is in state STARTED 2025-04-04 01:58:14.034520 | orchestrator | 2025-04-04 01:58:14 | INFO  | Task 454bc2f6-9d45-451f-8ce1-d6a4331b55a5 is in state STARTED 2025-04-04 01:58:17.104008 | orchestrator | 2025-04-04 01:58:14 | INFO  | Wait 1 second(s) until the next check 2025-04-04 01:58:17.104204 | orchestrator | 2025-04-04 01:58:17 | INFO  | Task fd1c07e6-f0db-4865-a8a3-41f95612fdf3 is in state STARTED 2025-04-04 01:58:17.104694 | orchestrator | 2025-04-04 01:58:17 | INFO  | Task 8a72e806-06b2-4248-8108-4d22850067f9 is in state STARTED 2025-04-04 01:58:17.106600 | orchestrator | 2025-04-04 01:58:17 | INFO  | Task 454bc2f6-9d45-451f-8ce1-d6a4331b55a5 is in state 
STARTED 2025-04-04 01:58:20.162955 | orchestrator | 2025-04-04 01:58:17 | INFO  | Wait 1 second(s) until the next check 2025-04-04 01:58:20.163099 | orchestrator | 2025-04-04 01:58:20 | INFO  | Task fd1c07e6-f0db-4865-a8a3-41f95612fdf3 is in state STARTED 2025-04-04 01:58:20.163650 | orchestrator | 2025-04-04 01:58:20 | INFO  | Task 8a72e806-06b2-4248-8108-4d22850067f9 is in state STARTED 2025-04-04 01:58:20.166385 | orchestrator | 2025-04-04 01:58:20 | INFO  | Task 454bc2f6-9d45-451f-8ce1-d6a4331b55a5 is in state STARTED 2025-04-04 01:58:23.234276 | orchestrator | 2025-04-04 01:58:20 | INFO  | Wait 1 second(s) until the next check 2025-04-04 01:58:23.234466 | orchestrator | 2025-04-04 01:58:23 | INFO  | Task fd1c07e6-f0db-4865-a8a3-41f95612fdf3 is in state STARTED 2025-04-04 01:58:23.235180 | orchestrator | 2025-04-04 01:58:23 | INFO  | Task 8a72e806-06b2-4248-8108-4d22850067f9 is in state STARTED 2025-04-04 01:58:23.237451 | orchestrator | 2025-04-04 01:58:23 | INFO  | Task 454bc2f6-9d45-451f-8ce1-d6a4331b55a5 is in state STARTED 2025-04-04 01:58:26.283719 | orchestrator | 2025-04-04 01:58:23 | INFO  | Wait 1 second(s) until the next check 2025-04-04 01:58:26.283857 | orchestrator | 2025-04-04 01:58:26 | INFO  | Task fd1c07e6-f0db-4865-a8a3-41f95612fdf3 is in state STARTED 2025-04-04 01:58:26.287771 | orchestrator | 2025-04-04 01:58:26 | INFO  | Task 8a72e806-06b2-4248-8108-4d22850067f9 is in state STARTED 2025-04-04 01:58:29.344389 | orchestrator | 2025-04-04 01:58:26 | INFO  | Task 454bc2f6-9d45-451f-8ce1-d6a4331b55a5 is in state STARTED 2025-04-04 01:58:29.344517 | orchestrator | 2025-04-04 01:58:26 | INFO  | Wait 1 second(s) until the next check 2025-04-04 01:58:29.344584 | orchestrator | 2025-04-04 01:58:29 | INFO  | Task fd1c07e6-f0db-4865-a8a3-41f95612fdf3 is in state STARTED 2025-04-04 01:58:29.345117 | orchestrator | 2025-04-04 01:58:29 | INFO  | Task 8a72e806-06b2-4248-8108-4d22850067f9 is in state STARTED 2025-04-04 01:58:29.347271 | orchestrator | 
2025-04-04 01:58:29 | INFO  | Task 454bc2f6-9d45-451f-8ce1-d6a4331b55a5 is in state STARTED 2025-04-04 01:58:32.395397 | orchestrator | 2025-04-04 01:58:29 | INFO  | Wait 1 second(s) until the next check 2025-04-04 01:58:32.395532 | orchestrator | 2025-04-04 01:58:32 | INFO  | Task fd1c07e6-f0db-4865-a8a3-41f95612fdf3 is in state STARTED 2025-04-04 01:58:32.396677 | orchestrator | 2025-04-04 01:58:32 | INFO  | Task 8a72e806-06b2-4248-8108-4d22850067f9 is in state STARTED 2025-04-04 01:58:32.397844 | orchestrator | 2025-04-04 01:58:32 | INFO  | Task 454bc2f6-9d45-451f-8ce1-d6a4331b55a5 is in state STARTED 2025-04-04 01:58:32.397921 | orchestrator | 2025-04-04 01:58:32 | INFO  | Wait 1 second(s) until the next check 2025-04-04 01:58:35.448253 | orchestrator | 2025-04-04 01:58:35 | INFO  | Task fd1c07e6-f0db-4865-a8a3-41f95612fdf3 is in state STARTED 2025-04-04 01:58:35.449796 | orchestrator | 2025-04-04 01:58:35 | INFO  | Task 8a72e806-06b2-4248-8108-4d22850067f9 is in state STARTED 2025-04-04 01:58:35.452657 | orchestrator | 2025-04-04 01:58:35 | INFO  | Task 454bc2f6-9d45-451f-8ce1-d6a4331b55a5 is in state STARTED 2025-04-04 01:58:38.513322 | orchestrator | 2025-04-04 01:58:35 | INFO  | Wait 1 second(s) until the next check 2025-04-04 01:58:38.513460 | orchestrator | 2025-04-04 01:58:38 | INFO  | Task fd1c07e6-f0db-4865-a8a3-41f95612fdf3 is in state STARTED 2025-04-04 01:58:38.513587 | orchestrator | 2025-04-04 01:58:38 | INFO  | Task 8a72e806-06b2-4248-8108-4d22850067f9 is in state STARTED 2025-04-04 01:58:38.515034 | orchestrator | 2025-04-04 01:58:38 | INFO  | Task 454bc2f6-9d45-451f-8ce1-d6a4331b55a5 is in state STARTED 2025-04-04 01:58:38.515654 | orchestrator | 2025-04-04 01:58:38 | INFO  | Wait 1 second(s) until the next check 2025-04-04 01:58:41.577267 | orchestrator | 2025-04-04 01:58:41 | INFO  | Task fd1c07e6-f0db-4865-a8a3-41f95612fdf3 is in state STARTED 2025-04-04 01:58:44.632680 | orchestrator | 2025-04-04 01:58:41 | INFO  | Task 
8a72e806-06b2-4248-8108-4d22850067f9 is in state STARTED 2025-04-04 01:58:44.632844 | orchestrator | 2025-04-04 01:58:41 | INFO  | Task 454bc2f6-9d45-451f-8ce1-d6a4331b55a5 is in state STARTED 2025-04-04 01:58:44.632865 | orchestrator | 2025-04-04 01:58:41 | INFO  | Wait 1 second(s) until the next check 2025-04-04 01:58:44.632918 | orchestrator | 2025-04-04 01:58:44 | INFO  | Task fd1c07e6-f0db-4865-a8a3-41f95612fdf3 is in state STARTED 2025-04-04 01:58:44.633001 | orchestrator | 2025-04-04 01:58:44 | INFO  | Task 8a72e806-06b2-4248-8108-4d22850067f9 is in state STARTED 2025-04-04 01:58:44.637083 | orchestrator | 2025-04-04 01:58:44 | INFO  | Task 454bc2f6-9d45-451f-8ce1-d6a4331b55a5 is in state STARTED 2025-04-04 01:58:47.691782 | orchestrator | 2025-04-04 01:58:44 | INFO  | Wait 1 second(s) until the next check 2025-04-04 01:58:47.691925 | orchestrator | 2025-04-04 01:58:47 | INFO  | Task fd1c07e6-f0db-4865-a8a3-41f95612fdf3 is in state STARTED 2025-04-04 01:58:47.692055 | orchestrator | 2025-04-04 01:58:47 | INFO  | Task 8a72e806-06b2-4248-8108-4d22850067f9 is in state STARTED 2025-04-04 01:58:47.692820 | orchestrator | 2025-04-04 01:58:47 | INFO  | Task 454bc2f6-9d45-451f-8ce1-d6a4331b55a5 is in state STARTED 2025-04-04 01:58:50.744156 | orchestrator | 2025-04-04 01:58:47 | INFO  | Wait 1 second(s) until the next check 2025-04-04 01:58:50.744337 | orchestrator | 2025-04-04 01:58:50 | INFO  | Task fd1c07e6-f0db-4865-a8a3-41f95612fdf3 is in state STARTED 2025-04-04 01:58:50.744771 | orchestrator | 2025-04-04 01:58:50 | INFO  | Task 8a72e806-06b2-4248-8108-4d22850067f9 is in state STARTED 2025-04-04 01:58:50.745701 | orchestrator | 2025-04-04 01:58:50 | INFO  | Task 454bc2f6-9d45-451f-8ce1-d6a4331b55a5 is in state STARTED 2025-04-04 01:58:53.796575 | orchestrator | 2025-04-04 01:58:50 | INFO  | Wait 1 second(s) until the next check 2025-04-04 01:58:53.796737 | orchestrator | 2025-04-04 01:58:53 | INFO  | Task fd1c07e6-f0db-4865-a8a3-41f95612fdf3 is in state 
STARTED 2025-04-04 01:58:53.798252 | orchestrator | 2025-04-04 01:58:53 | INFO  | Task 8a72e806-06b2-4248-8108-4d22850067f9 is in state STARTED 2025-04-04 01:58:53.800052 | orchestrator | 2025-04-04 01:58:53 | INFO  | Task 454bc2f6-9d45-451f-8ce1-d6a4331b55a5 is in state STARTED 2025-04-04 01:58:53.800969 | orchestrator | 2025-04-04 01:58:53 | INFO  | Wait 1 second(s) until the next check 2025-04-04 01:58:56.852481 | orchestrator | 2025-04-04 01:58:56 | INFO  | Task fd1c07e6-f0db-4865-a8a3-41f95612fdf3 is in state STARTED 2025-04-04 01:58:59.915903 | orchestrator | 2025-04-04 01:58:56 | INFO  | Task 8a72e806-06b2-4248-8108-4d22850067f9 is in state STARTED 2025-04-04 01:58:59.916039 | orchestrator | 2025-04-04 01:58:56 | INFO  | Task 454bc2f6-9d45-451f-8ce1-d6a4331b55a5 is in state STARTED 2025-04-04 01:58:59.916058 | orchestrator | 2025-04-04 01:58:56 | INFO  | Wait 1 second(s) until the next check 2025-04-04 01:58:59.916091 | orchestrator | 2025-04-04 01:58:59 | INFO  | Task fd1c07e6-f0db-4865-a8a3-41f95612fdf3 is in state STARTED 2025-04-04 01:58:59.918258 | orchestrator | 2025-04-04 01:58:59 | INFO  | Task 8a72e806-06b2-4248-8108-4d22850067f9 is in state STARTED 2025-04-04 01:58:59.922343 | orchestrator | 2025-04-04 01:58:59 | INFO  | Task 454bc2f6-9d45-451f-8ce1-d6a4331b55a5 is in state STARTED 2025-04-04 01:59:02.982260 | orchestrator | 2025-04-04 01:58:59 | INFO  | Wait 1 second(s) until the next check 2025-04-04 01:59:02.982457 | orchestrator | 2025-04-04 01:59:02 | INFO  | Task fd1c07e6-f0db-4865-a8a3-41f95612fdf3 is in state STARTED 2025-04-04 01:59:02.982832 | orchestrator | 2025-04-04 01:59:02 | INFO  | Task 8a72e806-06b2-4248-8108-4d22850067f9 is in state STARTED 2025-04-04 01:59:02.982864 | orchestrator | 2025-04-04 01:59:02 | INFO  | Task 454bc2f6-9d45-451f-8ce1-d6a4331b55a5 is in state STARTED 2025-04-04 01:59:06.043908 | orchestrator | 2025-04-04 01:59:02 | INFO  | Wait 1 second(s) until the next check 2025-04-04 01:59:06.044069 | orchestrator | 
2025-04-04 01:59:06 | INFO  | Task fd1c07e6-f0db-4865-a8a3-41f95612fdf3 is in state STARTED
2025-04-04 01:59:06.050612 | orchestrator | 2025-04-04 01:59:06 | INFO  | Task 8a72e806-06b2-4248-8108-4d22850067f9 is in state STARTED
2025-04-04 01:59:06.052714 | orchestrator | 2025-04-04 01:59:06 | INFO  | Task 454bc2f6-9d45-451f-8ce1-d6a4331b55a5 is in state STARTED
2025-04-04 01:59:09.124364 | orchestrator | 2025-04-04 01:59:06 | INFO  | Wait 1 second(s) until the next check
2025-04-04 01:59:09.124532 | orchestrator | 2025-04-04 01:59:09 | INFO  | Task fd1c07e6-f0db-4865-a8a3-41f95612fdf3 is in state STARTED
2025-04-04 01:59:09.124615 | orchestrator | 2025-04-04 01:59:09 | INFO  | Task 8a72e806-06b2-4248-8108-4d22850067f9 is in state STARTED
2025-04-04 01:59:09.125571 | orchestrator | 2025-04-04 01:59:09 | INFO  | Task 454bc2f6-9d45-451f-8ce1-d6a4331b55a5 is in state STARTED
2025-04-04 01:59:09.125671 | orchestrator | 2025-04-04 01:59:09 | INFO  | Wait 1 second(s) until the next check
2025-04-04 01:59:12.171858 | orchestrator | 2025-04-04 01:59:12 | INFO  | Task fd1c07e6-f0db-4865-a8a3-41f95612fdf3 is in state STARTED
2025-04-04 01:59:12.172421 | orchestrator | 2025-04-04 01:59:12 | INFO  | Task 8a72e806-06b2-4248-8108-4d22850067f9 is in state STARTED
2025-04-04 01:59:12.175935 | orchestrator | 2025-04-04 01:59:12 | INFO  | Task 454bc2f6-9d45-451f-8ce1-d6a4331b55a5 is in state STARTED
2025-04-04 01:59:15.233930 | orchestrator | 2025-04-04 01:59:12 | INFO  | Wait 1 second(s) until the next check
2025-04-04 01:59:15.234129 | orchestrator | 2025-04-04 01:59:15 | INFO  | Task fd1c07e6-f0db-4865-a8a3-41f95612fdf3 is in state STARTED
2025-04-04 01:59:15.235915 | orchestrator | 2025-04-04 01:59:15 | INFO  | Task 8a72e806-06b2-4248-8108-4d22850067f9 is in state STARTED
2025-04-04 01:59:18.279487 | orchestrator | 2025-04-04 01:59:15 | INFO  | Task 454bc2f6-9d45-451f-8ce1-d6a4331b55a5 is in state STARTED
2025-04-04 01:59:18.279680 | orchestrator | 2025-04-04 01:59:15 | INFO  | Wait 1 second(s) until the next check
2025-04-04 01:59:18.279725 | orchestrator | 2025-04-04 01:59:18 | INFO  | Task fd1c07e6-f0db-4865-a8a3-41f95612fdf3 is in state STARTED
2025-04-04 01:59:18.279813 | orchestrator | 2025-04-04 01:59:18 | INFO  | Task 8a72e806-06b2-4248-8108-4d22850067f9 is in state STARTED
2025-04-04 01:59:18.280651 | orchestrator | 2025-04-04 01:59:18 | INFO  | Task 454bc2f6-9d45-451f-8ce1-d6a4331b55a5 is in state STARTED
2025-04-04 01:59:21.329904 | orchestrator | 2025-04-04 01:59:18 | INFO  | Wait 1 second(s) until the next check
2025-04-04 01:59:21.330112 | orchestrator | 2025-04-04 01:59:21 | INFO  | Task fd1c07e6-f0db-4865-a8a3-41f95612fdf3 is in state STARTED
2025-04-04 01:59:21.330883 | orchestrator | 2025-04-04 01:59:21 | INFO  | Task 8a72e806-06b2-4248-8108-4d22850067f9 is in state STARTED
2025-04-04 01:59:21.337716 | orchestrator | 2025-04-04 01:59:21 | INFO  | Task 454bc2f6-9d45-451f-8ce1-d6a4331b55a5 is in state STARTED
2025-04-04 01:59:24.387913 | orchestrator | 2025-04-04 01:59:21 | INFO  | Wait 1 second(s) until the next check
2025-04-04 01:59:24.388114 | orchestrator | 2025-04-04 01:59:24 | INFO  | Task fd1c07e6-f0db-4865-a8a3-41f95612fdf3 is in state STARTED
2025-04-04 01:59:24.388211 | orchestrator | 2025-04-04 01:59:24 | INFO  | Task 8a72e806-06b2-4248-8108-4d22850067f9 is in state STARTED
2025-04-04 01:59:24.388830 | orchestrator | 2025-04-04 01:59:24 | INFO  | Task 454bc2f6-9d45-451f-8ce1-d6a4331b55a5 is in state STARTED
2025-04-04 01:59:27.448839 | orchestrator | 2025-04-04 01:59:24 | INFO  | Wait 1 second(s) until the next check
2025-04-04 01:59:27.448986 | orchestrator | 2025-04-04 01:59:27 | INFO  | Task fd1c07e6-f0db-4865-a8a3-41f95612fdf3 is in state STARTED
2025-04-04 01:59:27.449495 | orchestrator | 2025-04-04 01:59:27 | INFO  | Task 8a72e806-06b2-4248-8108-4d22850067f9 is in state STARTED
2025-04-04 01:59:27.451032 | orchestrator | 2025-04-04 01:59:27 | INFO  | Task 454bc2f6-9d45-451f-8ce1-d6a4331b55a5 is in state STARTED
2025-04-04 01:59:27.451194 | orchestrator | 2025-04-04 01:59:27 | INFO  | Wait 1 second(s) until the next check
2025-04-04 01:59:30.507632 | orchestrator | 2025-04-04 01:59:30 | INFO  | Task fd1c07e6-f0db-4865-a8a3-41f95612fdf3 is in state STARTED
2025-04-04 01:59:30.507820 | orchestrator | 2025-04-04 01:59:30 | INFO  | Task 8a72e806-06b2-4248-8108-4d22850067f9 is in state STARTED
2025-04-04 01:59:30.508654 | orchestrator | 2025-04-04 01:59:30 | INFO  | Task 454bc2f6-9d45-451f-8ce1-d6a4331b55a5 is in state STARTED
2025-04-04 01:59:33.568445 | orchestrator | 2025-04-04 01:59:30 | INFO  | Wait 1 second(s) until the next check
2025-04-04 01:59:33.568591 | orchestrator | 2025-04-04 01:59:33 | INFO  | Task fd1c07e6-f0db-4865-a8a3-41f95612fdf3 is in state STARTED
2025-04-04 01:59:33.571129 | orchestrator | 2025-04-04 01:59:33 | INFO  | Task 8a72e806-06b2-4248-8108-4d22850067f9 is in state STARTED
2025-04-04 01:59:33.572950 | orchestrator | 2025-04-04 01:59:33 | INFO  | Task 454bc2f6-9d45-451f-8ce1-d6a4331b55a5 is in state STARTED
2025-04-04 01:59:33.573028 | orchestrator | 2025-04-04 01:59:33 | INFO  | Wait 1 second(s) until the next check
2025-04-04 01:59:36.623517 | orchestrator | 2025-04-04 01:59:36 | INFO  | Task fd1c07e6-f0db-4865-a8a3-41f95612fdf3 is in state STARTED
2025-04-04 01:59:36.625815 | orchestrator | 2025-04-04 01:59:36 | INFO  | Task 8a72e806-06b2-4248-8108-4d22850067f9 is in state STARTED
2025-04-04 01:59:36.627308 | orchestrator | 2025-04-04 01:59:36 | INFO  | Task 454bc2f6-9d45-451f-8ce1-d6a4331b55a5 is in state STARTED
2025-04-04 01:59:36.627466 | orchestrator | 2025-04-04 01:59:36 | INFO  | Wait 1 second(s) until the next check
2025-04-04 01:59:39.697159 | orchestrator | 2025-04-04 01:59:39 | INFO  | Task fd1c07e6-f0db-4865-a8a3-41f95612fdf3 is in state STARTED
2025-04-04 01:59:39.699682 | orchestrator | 2025-04-04 01:59:39 | INFO  | Task 8a72e806-06b2-4248-8108-4d22850067f9 is in state STARTED
2025-04-04 01:59:39.700854 | orchestrator |
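The repeated status lines above come from a simple poll-and-wait loop: check every task's state, and if any is still STARTED, sleep and check again. A minimal sketch in Python, where `get_states` is a hypothetical callable standing in for whatever the client uses to query task state:

```python
import time

def wait_for_tasks(get_states, interval=1.0, max_checks=100):
    """Poll until no task is in state STARTED, sleeping between checks.

    get_states is a hypothetical callable returning {task_id: state};
    this only sketches the wait loop visible in the log, not the real
    OSISM client API.
    """
    for _ in range(max_checks):
        states = get_states()
        if not any(s == "STARTED" for s in states.values()):
            return states
        time.sleep(interval)  # "Wait 1 second(s) until the next check"
    raise TimeoutError("tasks still STARTED after max_checks polls")

# Simulated state source: tasks finish on the third check.
calls = {"n": 0}
def fake_states():
    calls["n"] += 1
    state = "STARTED" if calls["n"] < 3 else "SUCCESS"
    return {"fd1c07e6": state, "8a72e806": state}

result = wait_for_tasks(fake_states, interval=0)
```

Capping the number of checks (`max_checks`) keeps a stuck task from blocking the job forever; the real job relies on Zuul's own timeout instead.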
2025-04-04 01:59:39 | INFO  | Task 454bc2f6-9d45-451f-8ce1-d6a4331b55a5 is in state STARTED 2025-04-04 01:59:39.700979 | orchestrator | 2025-04-04 01:59:39 | INFO  | Wait 1 second(s) until the next check 2025-04-04 01:59:42.753666 | orchestrator | 2025-04-04 01:59:42 | INFO  | Task fd1c07e6-f0db-4865-a8a3-41f95612fdf3 is in state STARTED 2025-04-04 01:59:42.753904 | orchestrator | 2025-04-04 01:59:42 | INFO  | Task 8a72e806-06b2-4248-8108-4d22850067f9 is in state STARTED 2025-04-04 01:59:42.756414 | orchestrator | 2025-04-04 01:59:42 | INFO  | Task 454bc2f6-9d45-451f-8ce1-d6a4331b55a5 is in state STARTED 2025-04-04 01:59:45.805819 | orchestrator | 2025-04-04 01:59:42 | INFO  | Wait 1 second(s) until the next check 2025-04-04 01:59:45.805963 | orchestrator | 2025-04-04 01:59:45 | INFO  | Task fd1c07e6-f0db-4865-a8a3-41f95612fdf3 is in state STARTED 2025-04-04 01:59:45.817949 | orchestrator | 2025-04-04 01:59:45.817995 | orchestrator | None 2025-04-04 01:59:45.818011 | orchestrator | 2025-04-04 01:59:45.818073 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-04-04 01:59:45.818105 | orchestrator | 2025-04-04 01:59:45.818119 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-04-04 01:59:45.818134 | orchestrator | Friday 04 April 2025 01:50:12 +0000 (0:00:00.473) 0:00:00.473 ********** 2025-04-04 01:59:45.818148 | orchestrator | ok: [testbed-node-0] 2025-04-04 01:59:45.818164 | orchestrator | ok: [testbed-node-1] 2025-04-04 01:59:45.818178 | orchestrator | ok: [testbed-node-2] 2025-04-04 01:59:45.818254 | orchestrator | 2025-04-04 01:59:45.818271 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-04-04 01:59:45.818367 | orchestrator | Friday 04 April 2025 01:50:12 +0000 (0:00:00.696) 0:00:01.170 ********** 2025-04-04 01:59:45.818386 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True) 2025-04-04 
01:59:45.818401 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True) 2025-04-04 01:59:45.818415 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True) 2025-04-04 01:59:45.818429 | orchestrator | 2025-04-04 01:59:45.818444 | orchestrator | PLAY [Apply role loadbalancer] ************************************************* 2025-04-04 01:59:45.818457 | orchestrator | 2025-04-04 01:59:45.818472 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2025-04-04 01:59:45.818486 | orchestrator | Friday 04 April 2025 01:50:13 +0000 (0:00:00.458) 0:00:01.629 ********** 2025-04-04 01:59:45.818502 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-04 01:59:45.818519 | orchestrator | 2025-04-04 01:59:45.818534 | orchestrator | TASK [loadbalancer : Check IPv6 support] *************************************** 2025-04-04 01:59:45.818550 | orchestrator | Friday 04 April 2025 01:50:15 +0000 (0:00:02.576) 0:00:04.205 ********** 2025-04-04 01:59:45.818587 | orchestrator | ok: [testbed-node-1] 2025-04-04 01:59:45.818604 | orchestrator | ok: [testbed-node-0] 2025-04-04 01:59:45.818620 | orchestrator | ok: [testbed-node-2] 2025-04-04 01:59:45.818635 | orchestrator | 2025-04-04 01:59:45.818651 | orchestrator | TASK [Setting sysctl values] *************************************************** 2025-04-04 01:59:45.818668 | orchestrator | Friday 04 April 2025 01:50:18 +0000 (0:00:02.929) 0:00:07.135 ********** 2025-04-04 01:59:45.818683 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-04 01:59:45.818699 | orchestrator | 2025-04-04 01:59:45.818715 | orchestrator | TASK [sysctl : Check IPv6 support] ********************************************* 2025-04-04 01:59:45.818731 | orchestrator | Friday 04 April 2025 01:50:21 +0000 (0:00:03.016) 0:00:10.151 ********** 2025-04-04 01:59:45.818746 | 
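The "Group hosts based on enabled services" task above sorts each node into a dynamic group named after the flag and its value (here `enable_loadbalancer_True`). A sketch of that grouping, mimicking Ansible's `group_by` key pattern rather than kolla-ansible's actual code:

```python
def group_by_enabled(hosts, flags):
    """Build group names of the form enable_<service>_<bool>, as the
    play's group_by task does (sketch only)."""
    groups = {}
    for host in hosts:
        for service, enabled in flags[host].items():
            groups.setdefault(f"enable_{service}_{enabled}", []).append(host)
    return groups

hosts = ["testbed-node-0", "testbed-node-1", "testbed-node-2"]
flags = {h: {"loadbalancer": True} for h in hosts}
groups = group_by_enabled(hosts, flags)
```

Later plays can then target the group directly (e.g. run the loadbalancer role only on `enable_loadbalancer_True`).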
orchestrator | ok: [testbed-node-0] 2025-04-04 01:59:45.818761 | orchestrator | ok: [testbed-node-2] 2025-04-04 01:59:45.818777 | orchestrator | ok: [testbed-node-1] 2025-04-04 01:59:45.818793 | orchestrator | 2025-04-04 01:59:45.818809 | orchestrator | TASK [sysctl : Setting sysctl values] ****************************************** 2025-04-04 01:59:45.818825 | orchestrator | Friday 04 April 2025 01:50:23 +0000 (0:00:01.864) 0:00:12.016 ********** 2025-04-04 01:59:45.818841 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-04-04 01:59:45.818857 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-04-04 01:59:45.818871 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-04-04 01:59:45.818885 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-04-04 01:59:45.818899 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-04-04 01:59:45.818913 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-04-04 01:59:45.818963 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-04-04 01:59:45.818979 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-04-04 01:59:45.818993 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-04-04 01:59:45.819007 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-04-04 01:59:45.819021 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-04-04 01:59:45.819035 | orchestrator | changed: [testbed-node-0] => (item={'name': 
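The sysctl task above reports `changed` for values it sets and `ok` for `net.ipv4.tcp_retries2`, whose value is the sentinel `KOLLA_UNSET`, meaning "leave this key alone". A sketch of that logic, assuming direct `/proc/sys` path mapping for illustration (the real role goes through Ansible's sysctl handling, not raw writes):

```python
def sysctl_path(name):
    # "net.ipv4.ip_nonlocal_bind" -> "/proc/sys/net/ipv4/ip_nonlocal_bind"
    return "/proc/sys/" + name.replace(".", "/")

def apply_sysctls(items, write=None):
    """Apply sysctl items like the log's task; 'KOLLA_UNSET' values are
    skipped, matching the task reporting 'ok' instead of 'changed'."""
    changed = []
    for item in items:
        if item["value"] == "KOLLA_UNSET":
            continue
        if write is not None:
            write(sysctl_path(item["name"]), str(item["value"]))
        changed.append(item["name"])
    return changed

items = [
    {"name": "net.ipv6.ip_nonlocal_bind", "value": 1},
    {"name": "net.ipv4.ip_nonlocal_bind", "value": 1},
    {"name": "net.ipv4.tcp_retries2", "value": "KOLLA_UNSET"},
    {"name": "net.unix.max_dgram_qlen", "value": 128},
]
changed = apply_sysctls(items)
```

The `ip_nonlocal_bind` keys matter here because keepalived moves a virtual IP between nodes, so haproxy must be able to bind an address the node does not currently hold.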
'net.unix.max_dgram_qlen', 'value': 128}) 2025-04-04 01:59:45.819049 | orchestrator | 2025-04-04 01:59:45.819063 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-04-04 01:59:45.819077 | orchestrator | Friday 04 April 2025 01:50:28 +0000 (0:00:04.452) 0:00:16.468 ********** 2025-04-04 01:59:45.819091 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2025-04-04 01:59:45.819106 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2025-04-04 01:59:45.819120 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2025-04-04 01:59:45.819134 | orchestrator | 2025-04-04 01:59:45.819148 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-04-04 01:59:45.819162 | orchestrator | Friday 04 April 2025 01:50:29 +0000 (0:00:01.195) 0:00:17.663 ********** 2025-04-04 01:59:45.819183 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2025-04-04 01:59:45.819202 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2025-04-04 01:59:45.819217 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2025-04-04 01:59:45.819276 | orchestrator | 2025-04-04 01:59:45.819309 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-04-04 01:59:45.819323 | orchestrator | Friday 04 April 2025 01:50:31 +0000 (0:00:02.544) 0:00:20.207 ********** 2025-04-04 01:59:45.819449 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)  2025-04-04 01:59:45.819473 | orchestrator | skipping: [testbed-node-0] 2025-04-04 01:59:45.819499 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)  2025-04-04 01:59:45.819514 | orchestrator | skipping: [testbed-node-1] 2025-04-04 01:59:45.819529 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)  2025-04-04 01:59:45.819543 | orchestrator | skipping: [testbed-node-2] 2025-04-04 01:59:45.819558 | orchestrator | 2025-04-04 01:59:45.819572 | orchestrator | TASK [loadbalancer 
: Ensuring config directories exist] ************************ 2025-04-04 01:59:45.819587 | orchestrator | Friday 04 April 2025 01:50:33 +0000 (0:00:01.495) 0:00:21.703 ********** 2025-04-04 01:59:45.819604 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-04-04 01:59:45.819625 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-04-04 01:59:45.819641 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-04-04 01:59:45.819656 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-04-04 01:59:45.819672 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-04-04 01:59:45.819694 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-04-04 01:59:45.819716 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-04-04 01:59:45.819733 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__ff04cd419aa5f542217133a00c3a2d7308582110', '__omit_place_holder__ff04cd419aa5f542217133a00c3a2d7308582110'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-04-04 01:59:45.819748 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-04-04 01:59:45.819763 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__ff04cd419aa5f542217133a00c3a2d7308582110', '__omit_place_holder__ff04cd419aa5f542217133a00c3a2d7308582110'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-04-04 01:59:45.819778 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-04-04 01:59:45.819792 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__ff04cd419aa5f542217133a00c3a2d7308582110', 
'__omit_place_holder__ff04cd419aa5f542217133a00c3a2d7308582110'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-04-04 01:59:45.819813 | orchestrator | 2025-04-04 01:59:45.819828 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2025-04-04 01:59:45.819842 | orchestrator | Friday 04 April 2025 01:50:36 +0000 (0:00:03.592) 0:00:25.296 ********** 2025-04-04 01:59:45.819883 | orchestrator | changed: [testbed-node-0] 2025-04-04 01:59:45.819899 | orchestrator | changed: [testbed-node-2] 2025-04-04 01:59:45.819914 | orchestrator | changed: [testbed-node-1] 2025-04-04 01:59:45.819928 | orchestrator | 2025-04-04 01:59:45.819950 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] **** 2025-04-04 01:59:45.819965 | orchestrator | Friday 04 April 2025 01:50:39 +0000 (0:00:02.839) 0:00:28.136 ********** 2025-04-04 01:59:45.820004 | orchestrator | changed: [testbed-node-0] => (item=users) 2025-04-04 01:59:45.820073 | orchestrator | changed: [testbed-node-1] => (item=users) 2025-04-04 01:59:45.820088 | orchestrator | changed: [testbed-node-2] => (item=users) 2025-04-04 01:59:45.820129 | orchestrator | changed: [testbed-node-0] => (item=rules) 2025-04-04 01:59:45.820146 | orchestrator | changed: [testbed-node-2] => (item=rules) 2025-04-04 01:59:45.820160 | orchestrator | changed: [testbed-node-1] => (item=rules) 2025-04-04 01:59:45.820174 | orchestrator | 2025-04-04 01:59:45.820189 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2025-04-04 01:59:45.820226 | orchestrator | Friday 04 April 2025 01:50:45 +0000 (0:00:05.493) 0:00:33.629 ********** 2025-04-04 01:59:45.820242 | orchestrator | changed: [testbed-node-0] 2025-04-04 01:59:45.820256 | orchestrator | changed: [testbed-node-1] 2025-04-04 01:59:45.820360 | orchestrator | 
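Each container definition dumped above carries a `healthcheck` dict whose numeric fields (`interval`, `retries`, `start_period`, `timeout`) arrive as strings. A small sketch that normalizes the dict's shape to typed fields, using the haproxy entry from the log; how kolla-ansible actually consumes these values may differ:

```python
def normalize_healthcheck(hc):
    """Convert the string-valued healthcheck dict from the log into
    typed fields in seconds (sketch of the dict's shape only)."""
    return {
        "test": list(hc["test"]),
        "interval_s": int(hc["interval"]),
        "retries": int(hc["retries"]),
        "start_period_s": int(hc["start_period"]),
        "timeout_s": int(hc["timeout"]),
    }

haproxy_hc = {
    "interval": "30", "retries": "3", "start_period": "5",
    "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:61313"],
    "timeout": "30",
}
norm = normalize_healthcheck(haproxy_hc)
```

The `["CMD-SHELL", ...]` form mirrors Docker's healthcheck convention: the second element is run through a shell, and a container is marked unhealthy after `retries` consecutive failures.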
changed: [testbed-node-2] 2025-04-04 01:59:45.820376 | orchestrator | 2025-04-04 01:59:45.820390 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2025-04-04 01:59:45.820405 | orchestrator | Friday 04 April 2025 01:50:47 +0000 (0:00:02.152) 0:00:35.782 ********** 2025-04-04 01:59:45.820419 | orchestrator | ok: [testbed-node-0] 2025-04-04 01:59:45.820434 | orchestrator | ok: [testbed-node-1] 2025-04-04 01:59:45.820448 | orchestrator | ok: [testbed-node-2] 2025-04-04 01:59:45.820462 | orchestrator | 2025-04-04 01:59:45.820477 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2025-04-04 01:59:45.820491 | orchestrator | Friday 04 April 2025 01:50:51 +0000 (0:00:03.682) 0:00:39.464 ********** 2025-04-04 01:59:45.820506 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-04-04 01:59:45.820521 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-04-04 01:59:45.820536 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-04-04 01:59:45.820559 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-04-04 01:59:45.820583 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-04-04 01:59:45.820599 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-04-04 01:59:45.820614 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-04-04 01:59:45.820629 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__ff04cd419aa5f542217133a00c3a2d7308582110', '__omit_place_holder__ff04cd419aa5f542217133a00c3a2d7308582110'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-04-04 01:59:45.820692 | orchestrator | skipping: [testbed-node-0] 2025-04-04 01:59:45.820708 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__ff04cd419aa5f542217133a00c3a2d7308582110', '__omit_place_holder__ff04cd419aa5f542217133a00c3a2d7308582110'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-04-04 01:59:45.820729 | orchestrator | skipping: [testbed-node-1] 2025-04-04 01:59:45.820744 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-04-04 01:59:45.820759 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-04-04 01:59:45.820781 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__ff04cd419aa5f542217133a00c3a2d7308582110', '__omit_place_holder__ff04cd419aa5f542217133a00c3a2d7308582110'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-04-04 01:59:45.820823 | orchestrator | skipping: [testbed-node-2] 2025-04-04 01:59:45.820838 | orchestrator | 2025-04-04 01:59:45.820852 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2025-04-04 01:59:45.820867 | orchestrator | Friday 04 April 2025 01:50:56 +0000 (0:00:04.989) 0:00:44.454 ********** 2025-04-04 01:59:45.820881 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
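The pattern in the two tasks above is a filter on the service map's `enabled` flag: `haproxy-ssh` (`enabled: False`) is always reported as `skipping`, while the three enabled services get their checks copied. A minimal sketch of that selection:

```python
services = {
    "haproxy":     {"enabled": True},
    "proxysql":    {"enabled": True},
    "keepalived":  {"enabled": True},
    "haproxy-ssh": {"enabled": False},
}

def enabled_services(svcs):
    # Items with enabled=False are the ones the task reports as 'skipping'.
    return {name: cfg for name, cfg in svcs.items() if cfg["enabled"]}

active = enabled_services(services)
```

Keeping disabled services in the map (rather than removing them) is what lets the "Removing checks for services which are disabled" task clean up leftovers when a flag is flipped off.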
http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-04-04 01:59:45.820921 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-04-04 01:59:45.821048 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-04-04 01:59:45.821065 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-04-04 01:59:45.821087 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-04-04 01:59:45.821103 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-04-04 01:59:45.821119 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 
6032'], 'timeout': '30'}}}) 2025-04-04 01:59:45.821134 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-04-04 01:59:45.821148 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__ff04cd419aa5f542217133a00c3a2d7308582110', '__omit_place_holder__ff04cd419aa5f542217133a00c3a2d7308582110'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-04-04 01:59:45.821248 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__ff04cd419aa5f542217133a00c3a2d7308582110', '__omit_place_holder__ff04cd419aa5f542217133a00c3a2d7308582110'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-04-04 01:59:45.821264 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-04-04 01:59:45.821339 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__ff04cd419aa5f542217133a00c3a2d7308582110', '__omit_place_holder__ff04cd419aa5f542217133a00c3a2d7308582110'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-04-04 01:59:45.822198 | orchestrator | 2025-04-04 01:59:45.822328 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2025-04-04 01:59:45.822350 | orchestrator | Friday 04 April 2025 01:51:02 +0000 (0:00:06.123) 0:00:50.577 ********** 2025-04-04 01:59:45.822369 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-04-04 01:59:45.822387 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-04-04 01:59:45.822439 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-04-04 01:59:45.822455 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-04-04 01:59:45.822470 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-04-04 01:59:45.822503 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-04-04 01:59:45 | INFO  | Task 8a72e806-06b2-4248-8108-4d22850067f9 is in state SUCCESS 2025-04-04 01:59:45.824651 | orchestrator | changed: [testbed-node-0] => (item={'key':
'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-04-04 01:59:45.824664 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__ff04cd419aa5f542217133a00c3a2d7308582110', '__omit_place_holder__ff04cd419aa5f542217133a00c3a2d7308582110'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-04-04 01:59:45.824686 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-04-04 01:59:45.824711 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 
'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__ff04cd419aa5f542217133a00c3a2d7308582110', '__omit_place_holder__ff04cd419aa5f542217133a00c3a2d7308582110'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-04-04 01:59:45.824722 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-04-04 01:59:45.824733 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__ff04cd419aa5f542217133a00c3a2d7308582110', '__omit_place_holder__ff04cd419aa5f542217133a00c3a2d7308582110'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-04-04 01:59:45.824744 | orchestrator | 2025-04-04 01:59:45.824755 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] 
********************************* 2025-04-04 01:59:45.824872 | orchestrator | Friday 04 April 2025 01:51:06 +0000 (0:00:03.858) 0:00:54.436 ********** 2025-04-04 01:59:45.824892 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-04-04 01:59:45.824903 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-04-04 01:59:45.824913 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-04-04 01:59:45.824924 | orchestrator | 2025-04-04 01:59:45.824934 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2025-04-04 01:59:45.824945 | orchestrator | Friday 04 April 2025 01:51:12 +0000 (0:00:06.301) 0:01:00.738 ********** 2025-04-04 01:59:45.824955 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-04-04 01:59:45.824966 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-04-04 01:59:45.824976 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-04-04 01:59:45.824993 | orchestrator | 2025-04-04 01:59:45.825003 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2025-04-04 01:59:45.825014 | orchestrator | Friday 04 April 2025 01:51:16 +0000 (0:00:04.272) 0:01:05.010 ********** 2025-04-04 01:59:45.825024 | orchestrator | skipping: [testbed-node-0] 2025-04-04 01:59:45.825035 | orchestrator | skipping: [testbed-node-1] 2025-04-04 01:59:45.825046 | orchestrator | skipping: [testbed-node-2] 2025-04-04 01:59:45.825056 | orchestrator | 2025-04-04 01:59:45.825066 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2025-04-04 
01:59:45.825077 | orchestrator | Friday 04 April 2025 01:51:18 +0000 (0:00:01.692) 0:01:06.703 ********** 2025-04-04 01:59:45.825087 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-04-04 01:59:45.825099 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-04-04 01:59:45.825109 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-04-04 01:59:45.825120 | orchestrator | 2025-04-04 01:59:45.825130 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2025-04-04 01:59:45.825140 | orchestrator | Friday 04 April 2025 01:51:22 +0000 (0:00:04.675) 0:01:11.378 ********** 2025-04-04 01:59:45.825151 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-04-04 01:59:45.825161 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-04-04 01:59:45.825172 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-04-04 01:59:45.825182 | orchestrator | 2025-04-04 01:59:45.825192 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2025-04-04 01:59:45.825203 | orchestrator | Friday 04 April 2025 01:51:26 +0000 (0:00:03.375) 0:01:14.754 ********** 2025-04-04 01:59:45.825215 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2025-04-04 01:59:45.825232 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem) 2025-04-04 01:59:45.825244 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2025-04-04 01:59:45.825255 | orchestrator | 2025-04-04 01:59:45.825267 | orchestrator | TASK [loadbalancer : 
Copying over haproxy-internal.pem] ************************ 2025-04-04 01:59:45.825278 | orchestrator | Friday 04 April 2025 01:51:28 +0000 (0:00:02.645) 0:01:17.399 ********** 2025-04-04 01:59:45.825308 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2025-04-04 01:59:45.825319 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2025-04-04 01:59:45.825330 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2025-04-04 01:59:45.825341 | orchestrator | 2025-04-04 01:59:45.825356 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2025-04-04 01:59:45.825368 | orchestrator | Friday 04 April 2025 01:51:31 +0000 (0:00:02.236) 0:01:19.635 ********** 2025-04-04 01:59:45.825380 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-04 01:59:45.825391 | orchestrator | 2025-04-04 01:59:45.825402 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] *** 2025-04-04 01:59:45.825423 | orchestrator | Friday 04 April 2025 01:51:32 +0000 (0:00:00.989) 0:01:20.625 ********** 2025-04-04 01:59:45.825436 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-04-04 01:59:45.825487 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': 
{'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-04-04 01:59:45.825501 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-04-04 01:59:45.825514 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-04-04 01:59:45.825526 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-04-04 01:59:45.825567 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-04-04 01:59:45.825579 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-04-04 01:59:45.825678 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 
'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-04-04 01:59:45.825692 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-04-04 01:59:45.825702 | orchestrator | 2025-04-04 01:59:45.825713 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] *** 2025-04-04 01:59:45.825723 | orchestrator | Friday 04 April 2025 01:51:35 +0000 (0:00:03.466) 0:01:24.092 ********** 2025-04-04 01:59:45.825733 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-04-04 01:59:45.825744 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-04-04 01:59:45.825754 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-04-04 01:59:45.825765 | orchestrator | skipping: [testbed-node-0] 2025-04-04 01:59:45.825776 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-04-04 01:59:45.825787 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-04-04 01:59:45.825809 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-04-04 01:59:45.825820 | orchestrator | skipping: [testbed-node-1] 2025-04-04 01:59:45.825831 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-04-04 01:59:45.825842 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-04-04 01:59:45.825852 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-04-04 01:59:45.825863 | orchestrator | skipping: [testbed-node-2] 2025-04-04 01:59:45.825873 | orchestrator | 2025-04-04 01:59:45.825884 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] *** 2025-04-04 01:59:45.825894 | orchestrator | Friday 04 April 2025 01:51:36 +0000 (0:00:01.020) 0:01:25.113 ********** 2025-04-04 01:59:45.825909 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-04-04 01:59:45.825934 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-04-04 01:59:45.825950 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-04-04 01:59:45.825961 | orchestrator | skipping: [testbed-node-0] 2025-04-04 01:59:45.825971 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-04-04 01:59:45.825982 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-04-04 01:59:45.826014 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-04-04 01:59:45.826131 | orchestrator | skipping: [testbed-node-1] 2025-04-04 01:59:45.826143 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-04-04 01:59:45.826162 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-04-04 01:59:45.826203 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-04-04 01:59:45.826214 | orchestrator | skipping: [testbed-node-2] 2025-04-04 01:59:45.826225 | orchestrator | 2025-04-04 01:59:45.826258 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************ 2025-04-04 01:59:45.826275 | orchestrator | Friday 04 April 2025 01:51:40 +0000 (0:00:03.461) 0:01:28.574 ********** 2025-04-04 01:59:45.826302 | orchestrator | changed: [testbed-node-0] => 
(item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-04-04 01:59:45.826313 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-04-04 01:59:45.826324 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-04-04 01:59:45.826334 | orchestrator | 2025-04-04 01:59:45.826345 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2025-04-04 01:59:45.826355 | orchestrator | Friday 04 April 2025 01:51:42 +0000 (0:00:02.586) 0:01:31.160 ********** 2025-04-04 01:59:45.826366 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-04-04 01:59:45.826376 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-04-04 01:59:45.826386 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-04-04 01:59:45.826396 | orchestrator | 2025-04-04 01:59:45.826406 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2025-04-04 01:59:45.826417 | orchestrator | Friday 04 April 2025 01:51:46 +0000 (0:00:03.436) 0:01:34.596 ********** 2025-04-04 01:59:45.826427 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-04-04 01:59:45.826437 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-04-04 01:59:45.826448 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-04-04 01:59:45.826458 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-04-04 01:59:45.826468 | orchestrator | skipping: 
[testbed-node-0] 2025-04-04 01:59:45.826479 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-04-04 01:59:45.826489 | orchestrator | skipping: [testbed-node-1] 2025-04-04 01:59:45.826499 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-04-04 01:59:45.826509 | orchestrator | skipping: [testbed-node-2] 2025-04-04 01:59:45.826520 | orchestrator | 2025-04-04 01:59:45.826530 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] **************************** 2025-04-04 01:59:45.826540 | orchestrator | Friday 04 April 2025 01:51:48 +0000 (0:00:02.655) 0:01:37.252 ********** 2025-04-04 01:59:45.826558 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-04-04 01:59:45.826568 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-04-04 01:59:45.826620 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-04-04 01:59:45.826659 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-04-04 01:59:45.826700 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-04-04 01:59:45.826713 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-04-04 01:59:45.826730 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-04-04 01:59:45.826741 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__ff04cd419aa5f542217133a00c3a2d7308582110', '__omit_place_holder__ff04cd419aa5f542217133a00c3a2d7308582110'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-04-04 01:59:45.826752 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-04-04 01:59:45.826806 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__ff04cd419aa5f542217133a00c3a2d7308582110', '__omit_place_holder__ff04cd419aa5f542217133a00c3a2d7308582110'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-04-04 01:59:45.826831 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-04-04 
01:59:45.826843 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__ff04cd419aa5f542217133a00c3a2d7308582110', '__omit_place_holder__ff04cd419aa5f542217133a00c3a2d7308582110'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-04-04 01:59:45.826854 | orchestrator | 2025-04-04 01:59:45.826864 | orchestrator | TASK [include_role : aodh] ***************************************************** 2025-04-04 01:59:45.826874 | orchestrator | Friday 04 April 2025 01:51:52 +0000 (0:00:03.753) 0:01:41.006 ********** 2025-04-04 01:59:45.826939 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-04 01:59:45.826951 | orchestrator | 2025-04-04 01:59:45.826962 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2025-04-04 01:59:45.826972 | orchestrator | Friday 04 April 2025 01:51:53 +0000 (0:00:01.063) 0:01:42.070 ********** 2025-04-04 01:59:45.826983 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-04-04 01:59:45.826995 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-04-04 01:59:45.827006 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-04-04 01:59:45.827028 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-04-04 01:59:45.827040 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-04-04 01:59:45.827051 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-04-04 01:59:45.827066 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:18.0.1.20241206', 
'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-04-04 01:59:45.827077 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-04-04 01:59:45.827088 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-04-04 01:59:45.827109 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 
'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-04-04 01:59:45.827120 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-04-04 01:59:45.827131 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-04-04 01:59:45.827146 | orchestrator | 2025-04-04 01:59:45.827157 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2025-04-04 01:59:45.827167 | orchestrator | Friday 04 April 2025 01:52:00 +0000 
(0:00:06.366) 0:01:48.436 ********** 2025-04-04 01:59:45.827178 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-04-04 01:59:45.827189 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-04-04 01:59:45.827204 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 
'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-04-04 01:59:45.827221 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-04-04 01:59:45.827232 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-04-04 01:59:45.827248 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:18.0.1.20241206', 'volumes': 
['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-04-04 01:59:45.827259 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-04-04 01:59:45.827270 | orchestrator | skipping: [testbed-node-1] 2025-04-04 01:59:45.827359 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-04-04 01:59:45.827373 | orchestrator | skipping: [testbed-node-0] 2025-04-04 01:59:45.827389 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:18.0.1.20241206', 'volumes': 
['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-04-04 01:59:45.827406 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-04-04 01:59:45.827441 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-04-04 01:59:45.827460 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': 
{'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-04-04 01:59:45.827470 | orchestrator | skipping: [testbed-node-2] 2025-04-04 01:59:45.827489 | orchestrator | 2025-04-04 01:59:45.827500 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2025-04-04 01:59:45.827510 | orchestrator | Friday 04 April 2025 01:52:01 +0000 (0:00:01.320) 0:01:49.756 ********** 2025-04-04 01:59:45.827520 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-04-04 01:59:45.827533 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-04-04 01:59:45.827543 | orchestrator | skipping: [testbed-node-0] 2025-04-04 01:59:45.827554 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-04-04 01:59:45.827565 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-04-04 01:59:45.827575 | orchestrator | skipping: [testbed-node-1] 2025-04-04 01:59:45.827585 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-04-04 01:59:45.827596 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-04-04 01:59:45.827606 | orchestrator | skipping: [testbed-node-2] 2025-04-04 01:59:45.827617 | orchestrator | 2025-04-04 01:59:45.827627 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2025-04-04 01:59:45.827637 | orchestrator | Friday 04 April 2025 01:52:03 +0000 (0:00:01.881) 0:01:51.638 ********** 2025-04-04 01:59:45.827648 | orchestrator | changed: [testbed-node-0] 2025-04-04 01:59:45.827658 | orchestrator | changed: [testbed-node-1] 2025-04-04 01:59:45.827668 | orchestrator | changed: [testbed-node-2] 2025-04-04 01:59:45.827678 | orchestrator | 2025-04-04 01:59:45.827688 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2025-04-04 01:59:45.827698 | orchestrator | Friday 04 April 2025 01:52:04 +0000 (0:00:01.414) 0:01:53.052 ********** 2025-04-04 01:59:45.827708 | orchestrator | changed: [testbed-node-0] 2025-04-04 01:59:45.827719 | orchestrator | changed: [testbed-node-1] 2025-04-04 01:59:45.827729 | orchestrator | changed: [testbed-node-2] 2025-04-04 01:59:45.827792 | orchestrator | 2025-04-04 01:59:45.827803 | orchestrator | TASK [include_role : barbican] ************************************************* 2025-04-04 01:59:45.827813 | orchestrator | Friday 04 April 2025 01:52:07 +0000 (0:00:02.371) 0:01:55.424 ********** 2025-04-04 01:59:45.827824 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-04 01:59:45.827869 | orchestrator | 2025-04-04 01:59:45.827879 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] 
******************* 2025-04-04 01:59:45.827892 | orchestrator | Friday 04 April 2025 01:52:07 +0000 (0:00:00.853) 0:01:56.277 ********** 2025-04-04 01:59:45.827908 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-04-04 01:59:45.827923 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-04-04 01:59:45.827954 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': 
{'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-04-04 01:59:45.827965 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-04-04 01:59:45.827974 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-04-04 01:59:45.827994 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-04-04 01:59:45.828004 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 
'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-04-04 01:59:45.828017 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-04-04 01:59:45.828027 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-04-04 01:59:45.828035 | orchestrator | 2025-04-04 01:59:45.828044 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2025-04-04 01:59:45.828053 | orchestrator | Friday 04 April 2025 01:52:15 +0000 (0:00:07.473) 0:02:03.751 ********** 2025-04-04 01:59:45.828062 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 
'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-04-04 01:59:45.828098 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-04-04 01:59:45.828109 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-04-04 01:59:45.828118 | orchestrator | skipping: [testbed-node-0] 2025-04-04 01:59:45.828128 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-04-04 01:59:45.828137 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 
5672'], 'timeout': '30'}}})  2025-04-04 01:59:45.828146 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-04-04 01:59:45.828160 | orchestrator | skipping: [testbed-node-1] 2025-04-04 01:59:45.828173 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-04-04 01:59:45.828183 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 
'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-04-04 01:59:45.828192 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-04-04 01:59:45.828201 | orchestrator | skipping: [testbed-node-2] 2025-04-04 01:59:45.828210 | orchestrator | 2025-04-04 01:59:45.828219 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2025-04-04 01:59:45.828228 | orchestrator | Friday 04 April 2025 01:52:16 +0000 (0:00:01.069) 0:02:04.820 ********** 2025-04-04 01:59:45.828236 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-04-04 01:59:45.828245 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-04-04 01:59:45.828255 | orchestrator | skipping: [testbed-node-0] 2025-04-04 01:59:45.828264 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-04-04 01:59:45.828273 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-04-04 01:59:45.828303 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-04-04 01:59:45.828321 | orchestrator | skipping: [testbed-node-1] 2025-04-04 01:59:45.828330 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-04-04 01:59:45.828339 | orchestrator | skipping: [testbed-node-2] 2025-04-04 01:59:45.828347 | orchestrator | 2025-04-04 01:59:45.828356 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2025-04-04 01:59:45.828366 | orchestrator | Friday 04 April 2025 01:52:17 +0000 (0:00:01.105) 0:02:05.926 ********** 2025-04-04 01:59:45.828375 | orchestrator | changed: [testbed-node-0] 2025-04-04 01:59:45.828384 | orchestrator | changed: [testbed-node-1] 2025-04-04 01:59:45.828393 | orchestrator | changed: [testbed-node-2] 2025-04-04 01:59:45.828402 | orchestrator | 2025-04-04 01:59:45.828410 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2025-04-04 01:59:45.828419 | orchestrator | 
Friday 04 April 2025 01:52:18 +0000 (0:00:01.466) 0:02:07.393 ********** 2025-04-04 01:59:45.828428 | orchestrator | changed: [testbed-node-0] 2025-04-04 01:59:45.828437 | orchestrator | changed: [testbed-node-2] 2025-04-04 01:59:45.828445 | orchestrator | changed: [testbed-node-1] 2025-04-04 01:59:45.828454 | orchestrator | 2025-04-04 01:59:45.828463 | orchestrator | TASK [include_role : blazar] *************************************************** 2025-04-04 01:59:45.828472 | orchestrator | Friday 04 April 2025 01:52:21 +0000 (0:00:02.668) 0:02:10.062 ********** 2025-04-04 01:59:45.828480 | orchestrator | skipping: [testbed-node-0] 2025-04-04 01:59:45.828489 | orchestrator | skipping: [testbed-node-1] 2025-04-04 01:59:45.828498 | orchestrator | skipping: [testbed-node-2] 2025-04-04 01:59:45.828506 | orchestrator | 2025-04-04 01:59:45.828519 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2025-04-04 01:59:45.828528 | orchestrator | Friday 04 April 2025 01:52:22 +0000 (0:00:00.373) 0:02:10.435 ********** 2025-04-04 01:59:45.828537 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-04 01:59:45.828546 | orchestrator | 2025-04-04 01:59:45.828554 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2025-04-04 01:59:45.828563 | orchestrator | Friday 04 April 2025 01:52:23 +0000 (0:00:01.217) 0:02:11.652 ********** 2025-04-04 01:59:45.828591 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-04-04 01:59:45.828601 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-04-04 01:59:45.828616 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server 
testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-04-04 01:59:45.828625 | orchestrator | 2025-04-04 01:59:45.828634 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2025-04-04 01:59:45.828643 | orchestrator | Friday 04 April 2025 01:52:27 +0000 (0:00:03.829) 0:02:15.482 ********** 2025-04-04 01:59:45.828659 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-04-04 01:59:45.828669 | orchestrator | skipping: [testbed-node-0] 2025-04-04 01:59:45.828682 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check 
inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-04-04 01:59:45.828692 | orchestrator | skipping: [testbed-node-1] 2025-04-04 01:59:45.828701 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-04-04 01:59:45.828710 | orchestrator | skipping: [testbed-node-2] 2025-04-04 01:59:45.828723 | orchestrator | 2025-04-04 01:59:45.828732 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2025-04-04 01:59:45.828741 | orchestrator | Friday 04 April 2025 01:52:29 +0000 (0:00:02.187) 0:02:17.669 ********** 2025-04-04 01:59:45.828750 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-04-04 01:59:45.828760 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-04-04 01:59:45.828771 | orchestrator | skipping: [testbed-node-1] 2025-04-04 01:59:45.828780 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-04-04 01:59:45.828789 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-04-04 01:59:45.828798 | orchestrator | skipping: [testbed-node-2] 2025-04-04 01:59:45.828807 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-04-04 01:59:45.828824 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-04-04 01:59:45.828833 | orchestrator | skipping: [testbed-node-0] 2025-04-04 01:59:45.828842 | orchestrator | 2025-04-04 01:59:45.828851 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2025-04-04 01:59:45.828860 | orchestrator | Friday 04 April 2025 01:52:33 +0000 (0:00:03.835) 0:02:21.505 ********** 2025-04-04 01:59:45.828868 | orchestrator | skipping: [testbed-node-0] 2025-04-04 01:59:45.828877 | orchestrator | skipping: [testbed-node-1] 2025-04-04 01:59:45.828886 | orchestrator | skipping: [testbed-node-2] 2025-04-04 01:59:45.828894 | orchestrator | 2025-04-04 01:59:45.828903 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2025-04-04 01:59:45.828912 | orchestrator | Friday 04 April 2025 01:52:34 +0000 (0:00:01.410) 0:02:22.915 ********** 2025-04-04 01:59:45.828920 | orchestrator | skipping: [testbed-node-0] 2025-04-04 01:59:45.828929 | orchestrator | skipping: [testbed-node-2] 2025-04-04 01:59:45.828938 | orchestrator | skipping: [testbed-node-1] 2025-04-04 01:59:45.828946 | orchestrator | 2025-04-04 01:59:45.828960 | orchestrator | TASK [include_role : cinder] *************************************************** 2025-04-04 01:59:45.828969 | orchestrator | Friday 04 April 2025 01:52:36 +0000 (0:00:01.815) 0:02:24.731 ********** 2025-04-04 01:59:45.828978 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-04 01:59:45.828986 | orchestrator | 2025-04-04 01:59:45.828995 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2025-04-04 
01:59:45.829004 | orchestrator | Friday 04 April 2025 01:52:37 +0000 (0:00:01.262) 0:02:25.994 ********** 2025-04-04 01:59:45.829013 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-04-04 01:59:45.829022 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-04-04 01:59:45.829041 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 
'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-04-04 01:59:45.829056 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-04-04 01:59:45.829067 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-04-04 01:59:45.829081 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-04-04 01:59:45.829090 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-04-04 01:59:45.829106 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-04-04 01:59:45.829120 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-04-04 01:59:45.829129 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-04-04 01:59:45.829143 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-04-04 01:59:45.829159 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-04-04 01:59:45.829168 | orchestrator | 2025-04-04 01:59:45.829177 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2025-04-04 01:59:45.829186 | 
orchestrator | Friday 04 April 2025 01:52:44 +0000 (0:00:07.387) 0:02:33.382 ********** 2025-04-04 01:59:45.829194 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-04-04 01:59:45.829204 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-04-04 01:59:45.829218 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-04-04 01:59:45.829238 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-04-04 01:59:45.829247 | orchestrator | skipping: [testbed-node-0] 2025-04-04 01:59:45.829256 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 
'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-04-04 01:59:45.829265 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-04-04 01:59:45.829274 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-04-04 01:59:45.829308 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 
'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-04-04 01:59:45.829323 | orchestrator | skipping: [testbed-node-1] 2025-04-04 01:59:45.829333 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-04-04 01:59:45.829342 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-04-04 01:59:45.829351 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-04-04 01:59:45.829366 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-04-04 01:59:45.829375 | orchestrator | skipping: 
[testbed-node-2]
2025-04-04 01:59:45.829384 | orchestrator |
2025-04-04 01:59:45.829393 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************
2025-04-04 01:59:45.829406 | orchestrator | Friday 04 April 2025 01:52:46 +0000 (0:00:01.346) 0:02:34.728 **********
2025-04-04 01:59:45.829419 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-04-04 01:59:45.829432 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-04-04 01:59:45.829442 | orchestrator | skipping: [testbed-node-0]
2025-04-04 01:59:45.829451 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-04-04 01:59:45.829460 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-04-04 01:59:45.829469 | orchestrator | skipping: [testbed-node-1]
2025-04-04 01:59:45.829478 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-04-04 01:59:45.829486 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-04-04 01:59:45.829495 | orchestrator | skipping: [testbed-node-2]
2025-04-04 01:59:45.829504 | orchestrator |
2025-04-04 01:59:45.829513 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] *************
2025-04-04 01:59:45.829522 | orchestrator | Friday 04 April 2025 01:52:47 +0000 (0:00:01.420) 0:02:36.148 **********
2025-04-04 01:59:45.829530 | orchestrator | changed: [testbed-node-0]
2025-04-04 01:59:45.829539 | orchestrator | changed: [testbed-node-1]
2025-04-04 01:59:45.829548 | orchestrator | changed: [testbed-node-2]
2025-04-04 01:59:45.829556 | orchestrator |
2025-04-04 01:59:45.829565 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] *************
2025-04-04 01:59:45.829574 | orchestrator | Friday 04 April 2025 01:52:49 +0000 (0:00:01.361) 0:02:37.510 **********
2025-04-04 01:59:45.829583 | orchestrator | changed: [testbed-node-0]
2025-04-04 01:59:45.829591 | orchestrator | changed: [testbed-node-1]
2025-04-04 01:59:45.829600 | orchestrator | changed: [testbed-node-2]
2025-04-04 01:59:45.829608 | orchestrator |
2025-04-04 01:59:45.829617 | orchestrator | TASK [include_role : cloudkitty] ***********************************************
2025-04-04 01:59:45.829626 | orchestrator | Friday 04 April 2025 01:52:51 +0000 (0:00:02.648) 0:02:40.158 **********
2025-04-04 01:59:45.829635 | orchestrator | skipping: [testbed-node-0]
2025-04-04 01:59:45.829643 | orchestrator | skipping: [testbed-node-1]
2025-04-04 01:59:45.829652 | orchestrator | skipping: [testbed-node-2]
2025-04-04 01:59:45.829664 | orchestrator |
2025-04-04 01:59:45.829673 | orchestrator | TASK [include_role : cyborg] ***************************************************
2025-04-04 01:59:45.829682 | orchestrator | Friday 04 April 2025 01:52:52 +0000 (0:00:00.419) 0:02:40.578 **********
2025-04-04 01:59:45.829691 | orchestrator | skipping: [testbed-node-0]
2025-04-04 01:59:45.829700 | orchestrator | skipping: [testbed-node-1]
2025-04-04 01:59:45.829708 |
orchestrator | skipping: [testbed-node-2] 2025-04-04 01:59:45.829717 | orchestrator | 2025-04-04 01:59:45.829726 | orchestrator | TASK [include_role : designate] ************************************************ 2025-04-04 01:59:45.829734 | orchestrator | Friday 04 April 2025 01:52:52 +0000 (0:00:00.699) 0:02:41.277 ********** 2025-04-04 01:59:45.829743 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-04 01:59:45.829752 | orchestrator | 2025-04-04 01:59:45.829760 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2025-04-04 01:59:45.829769 | orchestrator | Friday 04 April 2025 01:52:54 +0000 (0:00:01.752) 0:02:43.029 ********** 2025-04-04 01:59:45.829784 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-04-04 01:59:45.829797 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-04-04 01:59:45.829807 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-04-04 01:59:45.829816 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-04-04 01:59:45.829825 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-04-04 01:59:45.829834 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-04-04 01:59:45.829856 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-04-04 01:59:45.829866 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 
'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-04-04 01:59:45.829880 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-04-04 01:59:45.829889 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-04-04 01:59:45.829898 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 
'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-04-04 01:59:45.829907 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-04-04 01:59:45.829926 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-04-04 01:59:45.829936 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 
'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-04-04 01:59:45.829950 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-04-04 01:59:45.829959 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen 
named 53'], 'timeout': '30'}}})  2025-04-04 01:59:45.829969 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-04-04 01:59:45.829983 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-04-04 01:59:45.829997 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-04-04 01:59:45.830006 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-04-04 01:59:45.830039 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-04-04 01:59:45.830050 | orchestrator | 2025-04-04 01:59:45.830064 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2025-04-04 01:59:45.830073 | orchestrator | Friday 04 April 2025 01:53:03 +0000 (0:00:08.399) 0:02:51.429 ********** 2025-04-04 01:59:45.830083 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-04-04 01:59:45.830099 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-04-04 01:59:45.830109 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-04-04 01:59:45.830122 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-04-04 01:59:45.830132 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-04-04 01:59:45.830145 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-04-04 01:59:45.830154 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': 
['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-04-04 01:59:45.830163 | orchestrator | skipping: [testbed-node-0] 2025-04-04 01:59:45.830178 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-04-04 01:59:45.830188 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 
53'], 'timeout': '30'}}})  2025-04-04 01:59:45.830202 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-04-04 01:59:45.830211 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-04-04 01:59:45.830220 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-04-04 01:59:45.830233 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-04-04 01:59:45.830248 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-04-04 01:59:45.830257 | orchestrator | skipping: [testbed-node-1] 2025-04-04 01:59:45.830266 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': 
'9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-04-04 01:59:45.830293 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-04-04 01:59:45.830303 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-04-04 01:59:45.830312 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-04-04 01:59:45.830326 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-04-04 01:59:45.830342 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-04-04 01:59:45.830352 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-04-04 01:59:45.830365 | orchestrator | skipping: [testbed-node-2] 2025-04-04 01:59:45.830374 | orchestrator | 2025-04-04 01:59:45.830383 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2025-04-04 01:59:45.830391 | orchestrator | Friday 04 April 2025 01:53:04 +0000 (0:00:01.635) 0:02:53.065 ********** 2025-04-04 01:59:45.830400 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-04-04 01:59:45.830409 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-04-04 01:59:45.830419 | orchestrator | skipping: [testbed-node-0] 2025-04-04 01:59:45.830428 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-04-04 01:59:45.830437 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-04-04 01:59:45.830445 | orchestrator | skipping: [testbed-node-1] 2025-04-04 01:59:45.830516 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-04-04 01:59:45.830529 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  
2025-04-04 01:59:45.830538 | orchestrator | skipping: [testbed-node-2] 2025-04-04 01:59:45.830547 | orchestrator | 2025-04-04 01:59:45.830556 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2025-04-04 01:59:45.830564 | orchestrator | Friday 04 April 2025 01:53:06 +0000 (0:00:02.153) 0:02:55.218 ********** 2025-04-04 01:59:45.830573 | orchestrator | changed: [testbed-node-0] 2025-04-04 01:59:45.830582 | orchestrator | changed: [testbed-node-1] 2025-04-04 01:59:45.830590 | orchestrator | changed: [testbed-node-2] 2025-04-04 01:59:45.830599 | orchestrator | 2025-04-04 01:59:45.830608 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2025-04-04 01:59:45.830616 | orchestrator | Friday 04 April 2025 01:53:08 +0000 (0:00:01.729) 0:02:56.947 ********** 2025-04-04 01:59:45.830625 | orchestrator | changed: [testbed-node-0] 2025-04-04 01:59:45.830633 | orchestrator | changed: [testbed-node-1] 2025-04-04 01:59:45.830642 | orchestrator | changed: [testbed-node-2] 2025-04-04 01:59:45.830651 | orchestrator | 2025-04-04 01:59:45.830659 | orchestrator | TASK [include_role : etcd] ***************************************************** 2025-04-04 01:59:45.830668 | orchestrator | Friday 04 April 2025 01:53:11 +0000 (0:00:02.798) 0:02:59.746 ********** 2025-04-04 01:59:45.830677 | orchestrator | skipping: [testbed-node-0] 2025-04-04 01:59:45.830686 | orchestrator | skipping: [testbed-node-1] 2025-04-04 01:59:45.830694 | orchestrator | skipping: [testbed-node-2] 2025-04-04 01:59:45.830703 | orchestrator | 2025-04-04 01:59:45.830712 | orchestrator | TASK [include_role : glance] *************************************************** 2025-04-04 01:59:45.830725 | orchestrator | Friday 04 April 2025 01:53:12 +0000 (0:00:00.735) 0:03:00.481 ********** 2025-04-04 01:59:45.831921 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-04 01:59:45.832008 | 
orchestrator | 2025-04-04 01:59:45.832030 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2025-04-04 01:59:45.832065 | orchestrator | Friday 04 April 2025 01:53:13 +0000 (0:00:01.479) 0:03:01.960 ********** 2025-04-04 01:59:45.832085 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 
5', '']}}}}) 2025-04-04 01:59:45.832105 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-04-04 01:59:45.832145 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-04-04 01:59:45.832170 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-04-04 01:59:45.832198 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 
'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-04-04 01:59:45.832221 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})
2025-04-04 01:59:45.832237 | orchestrator |
2025-04-04 01:59:45.832251 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] ***
2025-04-04 01:59:45.832266 | orchestrator | Friday 04 April 2025 01:53:21 +0000 (0:00:07.805) 0:03:09.765 **********
2025-04-04 01:59:45.832317 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2025-04-04 01:59:45.832343 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})
2025-04-04 01:59:45.832359 | orchestrator | skipping: [testbed-node-2]
2025-04-04 01:59:45.832383 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro',
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2025-04-04 01:59:45.832407 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})
2025-04-04 01:59:45.832423 | orchestrator | skipping: [testbed-node-0]
2025-04-04 01:59:45.832438 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2025-04-04 01:59:45.832468 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl
verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})
2025-04-04 01:59:45.832484 | orchestrator | skipping: [testbed-node-1]
2025-04-04 01:59:45.832498 | orchestrator |
2025-04-04 01:59:45.832513 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************
2025-04-04 01:59:45.832527 | orchestrator | Friday 04 April 2025 01:53:26 +0000 (0:00:04.825) 0:03:14.591 **********
2025-04-04 01:59:45.832541 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
2025-04-04 01:59:45.832556 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
2025-04-04 01:59:45.832572 | orchestrator | skipping: [testbed-node-0]
2025-04-04 01:59:45.832587 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
2025-04-04 01:59:45.832614 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
2025-04-04 01:59:45.832629 | orchestrator | skipping: [testbed-node-2]
2025-04-04 01:59:45.832644 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
2025-04-04 01:59:45.832659 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
2025-04-04 01:59:45.832674 | orchestrator | skipping: [testbed-node-1]
2025-04-04 01:59:45.832688 | orchestrator |
2025-04-04 01:59:45.832703 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] *************
2025-04-04 01:59:45.832725 | orchestrator | Friday 04 April 2025 01:53:31 +0000 (0:00:05.525) 0:03:20.117 **********
2025-04-04 01:59:45.832740 | orchestrator | changed: [testbed-node-0]
2025-04-04 01:59:45.832754 | orchestrator | changed: [testbed-node-1]
2025-04-04 01:59:45.832768 | orchestrator | changed: [testbed-node-2]
2025-04-04 01:59:45.832782 | orchestrator |
2025-04-04 01:59:45.832796 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] *************
2025-04-04 01:59:45.832810 | orchestrator | Friday 04 April 2025 01:53:33 +0000 (0:00:01.753) 0:03:21.870 **********
2025-04-04 01:59:45.832824 | orchestrator | changed: [testbed-node-0]
2025-04-04 01:59:45.832839 | orchestrator | changed: [testbed-node-1]
2025-04-04 01:59:45.832853 | orchestrator | changed: [testbed-node-2]
2025-04-04 01:59:45.832867 | orchestrator |
2025-04-04 01:59:45.832881 | orchestrator | TASK [include_role : gnocchi] **************************************************
2025-04-04 01:59:45.832896 | orchestrator | Friday 04 April 2025 01:53:36 +0000 (0:00:02.665) 0:03:24.535 **********
2025-04-04 01:59:45.832910 | orchestrator | skipping: [testbed-node-0]
2025-04-04 01:59:45.832924 | orchestrator | skipping: [testbed-node-1]
2025-04-04 01:59:45.832939 | orchestrator | skipping: [testbed-node-2]
2025-04-04 01:59:45.832953 | orchestrator |
2025-04-04 01:59:45.832967 | orchestrator | TASK [include_role : grafana] **************************************************
2025-04-04 01:59:45.832981 | orchestrator | Friday 04 April 2025 01:53:36 +0000 (0:00:00.719) 0:03:25.255 **********
2025-04-04 01:59:45.832995 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2
2025-04-04 01:59:45.833009 | orchestrator |
2025-04-04 01:59:45.833024 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ********************
2025-04-04 01:59:45.833038 | orchestrator | Friday 04 April 2025 01:53:38 +0000 (0:00:01.510) 0:03:26.766 **********
2025-04-04 01:59:45.833053 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-04-04 01:59:45.833074 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image':
'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-04-04 01:59:45.833097 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-04-04 01:59:45.833112 | orchestrator |
2025-04-04 01:59:45.833127 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] ***
2025-04-04 01:59:45.833141 | orchestrator | Friday 04 April 2025 01:53:42 +0000 (0:00:04.490) 0:03:31.256 **********
2025-04-04 01:59:45.833155 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-04-04 01:59:45.833171 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-04-04 01:59:45.833185 | orchestrator | skipping: [testbed-node-0]
2025-04-04 01:59:45.833199 | orchestrator | skipping: [testbed-node-1]
2025-04-04 01:59:45.833214 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-04-04 01:59:45.833234 | orchestrator | skipping: [testbed-node-2]
2025-04-04 01:59:45.833249 | orchestrator |
2025-04-04 01:59:45.833263 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] ***********************
2025-04-04 01:59:45.833277 | orchestrator | Friday 04 April 2025 01:53:43 +0000 (0:00:00.544) 0:03:31.801 **********
2025-04-04 01:59:45.833357 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})
2025-04-04 01:59:45.833375 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})
2025-04-04 01:59:45.833389 | orchestrator | skipping: [testbed-node-0]
2025-04-04 01:59:45.833402 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})
2025-04-04 01:59:45.833415 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})
2025-04-04 01:59:45.833427 | orchestrator | skipping: [testbed-node-1]
2025-04-04 01:59:45.833440 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})
2025-04-04 01:59:45.833458 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})
2025-04-04 01:59:45.833471 | orchestrator | skipping: [testbed-node-2]
2025-04-04 01:59:45.833484 | orchestrator |
2025-04-04 01:59:45.833497 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************
2025-04-04 01:59:45.833509 | orchestrator | Friday 04 April 2025 01:53:44 +0000 (0:00:01.278) 0:03:33.079 **********
2025-04-04 01:59:45.833521 | orchestrator | changed: [testbed-node-0]
2025-04-04 01:59:45.833534 | orchestrator | changed: [testbed-node-1]
2025-04-04 01:59:45.833546 | orchestrator | changed: [testbed-node-2]
2025-04-04 01:59:45.833559 | orchestrator |
2025-04-04 01:59:45.833572 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************
2025-04-04 01:59:45.833584 | orchestrator | Friday 04 April 2025 01:53:45 +0000 (0:00:01.306) 0:03:34.385 **********
2025-04-04 01:59:45.833596 | orchestrator | changed: [testbed-node-0]
2025-04-04 01:59:45.833609 | orchestrator | changed: [testbed-node-1]
2025-04-04 01:59:45.833621 | orchestrator | changed: [testbed-node-2]
2025-04-04 01:59:45.833634 | orchestrator |
2025-04-04 01:59:45.833646 | orchestrator | TASK [include_role : heat] *****************************************************
2025-04-04 01:59:45.833658 | orchestrator | Friday 04 April 2025 01:53:48 +0000 (0:00:02.748) 0:03:37.134 **********
2025-04-04 01:59:45.833671 | orchestrator | included: heat for testbed-node-0, testbed-node-1, testbed-node-2
2025-04-04 01:59:45.833683 | orchestrator |
2025-04-04 01:59:45.833696 | orchestrator | TASK [haproxy-config : Copying over heat haproxy config] ***********************
2025-04-04 01:59:45.833708 | orchestrator | Friday 04 April 2025 01:53:50 +0000 (0:00:01.656) 0:03:38.791 **********
2025-04-04 01:59:45.833721 | orchestrator | changed: [testbed-node-0] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval':
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}})
2025-04-04 01:59:45.833741 | orchestrator | changed: [testbed-node-1] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}})
2025-04-04 01:59:45.833755 | orchestrator | changed: [testbed-node-2] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}})
2025-04-04 01:59:45.833775 | orchestrator | changed: [testbed-node-0] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api-cfn:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}})
2025-04-04 01:59:45.833788 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-engine:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})
2025-04-04 01:59:45.833802 | orchestrator | changed: [testbed-node-1] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api-cfn:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}})
2025-04-04 01:59:45.833820 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-engine:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})
2025-04-04 01:59:45.833833 | orchestrator | changed: [testbed-node-2] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api-cfn:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8000'], 'timeout': '30'},
'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}})
2025-04-04 01:59:45.833846 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-engine:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})
2025-04-04 01:59:45.833859 | orchestrator |
2025-04-04 01:59:45.833877 | orchestrator | TASK [haproxy-config : Add configuration for heat when using single external frontend] ***
2025-04-04 01:59:45.833890 | orchestrator | Friday 04 April 2025 01:54:01 +0000 (0:00:10.846) 0:03:49.637 **********
2025-04-04 01:59:45.833903 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}})
2025-04-04 01:59:45.833916 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api-cfn:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}})
2025-04-04 01:59:45.833934 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-engine:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})
2025-04-04 01:59:45.833947 | orchestrator | skipping: [testbed-node-0]
2025-04-04 01:59:45.833960 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}})
2025-04-04 01:59:45.833978 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api-cfn:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}})
2025-04-04 01:59:45.833992 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-engine:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})
2025-04-04 01:59:45.834004 | orchestrator | skipping: [testbed-node-1]
2025-04-04 01:59:45.834059 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}})
2025-04-04 01:59:45.834081 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api-cfn:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode':
'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}})  2025-04-04 01:59:45.834095 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-engine:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})  2025-04-04 01:59:45.834108 | orchestrator | skipping: [testbed-node-2] 2025-04-04 01:59:45.834120 | orchestrator | 2025-04-04 01:59:45.834133 | orchestrator | TASK [haproxy-config : Configuring firewall for heat] ************************** 2025-04-04 01:59:45.834146 | orchestrator | Friday 04 April 2025 01:54:02 +0000 (0:00:01.682) 0:03:51.320 ********** 2025-04-04 01:59:45.834159 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  2025-04-04 01:59:45.834173 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  2025-04-04 01:59:45.834195 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat_api_cfn', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})  2025-04-04 01:59:45.834222 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat_api_cfn_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})  2025-04-04 01:59:45.834237 | orchestrator | skipping: [testbed-node-0] 2025-04-04 01:59:45.834249 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  2025-04-04 01:59:45.834262 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  2025-04-04 01:59:45.834308 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat_api_cfn', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})  2025-04-04 01:59:45.834322 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat_api_cfn_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})  2025-04-04 01:59:45.834335 | orchestrator | skipping: [testbed-node-1] 2025-04-04 01:59:45.834348 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  2025-04-04 01:59:45.834361 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  2025-04-04 01:59:45.834374 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'heat_api_cfn', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})  2025-04-04 01:59:45.834386 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat_api_cfn_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})  2025-04-04 01:59:45.834399 | orchestrator | skipping: [testbed-node-2] 2025-04-04 01:59:45.834415 | orchestrator | 2025-04-04 01:59:45.834428 | orchestrator | TASK [proxysql-config : Copying over heat ProxySQL users config] *************** 2025-04-04 01:59:45.834441 | orchestrator | Friday 04 April 2025 01:54:04 +0000 (0:00:01.756) 0:03:53.076 ********** 2025-04-04 01:59:45.834453 | orchestrator | changed: [testbed-node-0] 2025-04-04 01:59:45.834466 | orchestrator | changed: [testbed-node-1] 2025-04-04 01:59:45.834478 | orchestrator | changed: [testbed-node-2] 2025-04-04 01:59:45.834490 | orchestrator | 2025-04-04 01:59:45.834503 | orchestrator | TASK [proxysql-config : Copying over heat ProxySQL rules config] *************** 2025-04-04 01:59:45.834515 | orchestrator | Friday 04 April 2025 01:54:06 +0000 (0:00:01.672) 0:03:54.749 ********** 2025-04-04 01:59:45.834527 | orchestrator | changed: [testbed-node-0] 2025-04-04 01:59:45.834540 | orchestrator | changed: [testbed-node-1] 2025-04-04 01:59:45.834552 | orchestrator | changed: [testbed-node-2] 2025-04-04 01:59:45.834564 | orchestrator | 2025-04-04 01:59:45.834577 | orchestrator | TASK [include_role : horizon] ************************************************** 2025-04-04 01:59:45.834589 | orchestrator | Friday 04 April 2025 01:54:09 +0000 (0:00:02.947) 0:03:57.696 ********** 2025-04-04 01:59:45.834605 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-04 01:59:45.834618 | orchestrator | 2025-04-04 01:59:45.834631 | orchestrator | TASK [haproxy-config : 
Copying over horizon haproxy config] ******************** 2025-04-04 01:59:45.834643 | orchestrator | Friday 04 April 2025 01:54:10 +0000 (0:00:01.335) 0:03:59.031 ********** 2025-04-04 01:59:45.834665 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance 
roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-04-04 01:59:45.834689 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-04-04 01:59:45.834712 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 
'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-04-04 01:59:45.834753 | orchestrator | 2025-04-04 01:59:45.834766 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2025-04-04 01:59:45.834779 | orchestrator | Friday 04 April 2025 01:54:16 +0000 (0:00:05.961) 0:04:04.992 ********** 2025-04-04 01:59:45.834792 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-04-04 01:59:45.834820 | orchestrator | skipping: [testbed-node-0] 2025-04-04 01:59:45.834842 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 
'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-04-04 01:59:45.834863 | orchestrator | skipping: [testbed-node-1] 2025-04-04 01:59:45.834876 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 
'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-04-04 01:59:45.834895 | orchestrator | skipping: [testbed-node-2] 
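The `haproxy-config` tasks above loop over service dicts like the horizon entry printed in each `(item=...)` line. As a minimal sketch (this re-expresses the data visible in the log; the helper function is illustrative only, not kolla-ansible's actual API), the per-frontend structure and how the external entries relate to the `api.testbed.osism.xyz` VIP can be shown as:

```python
# Illustrative sketch, not kolla-ansible code: the 'haproxy' sub-dict for the
# horizon service, copied from the log output above. Each key is a frontend;
# 'external' selects the public VIP, 'port'/'listen_port' the exposed and
# backend ports.
horizon_haproxy = {
    "horizon": {"enabled": True, "mode": "http", "external": False,
                "port": "443", "listen_port": "80", "tls_backend": "no"},
    "horizon_redirect": {"enabled": True, "mode": "redirect",
                         "external": False, "port": "80", "listen_port": "80"},
    "horizon_external": {"enabled": True, "mode": "http", "external": True,
                         "external_fqdn": "api.testbed.osism.xyz",
                         "port": "443", "listen_port": "80",
                         "tls_backend": "no"},
    "horizon_external_redirect": {"enabled": True, "mode": "redirect",
                                  "external": True,
                                  "external_fqdn": "api.testbed.osism.xyz",
                                  "port": "80", "listen_port": "80"},
    "acme_client": {"enabled": True, "with_frontend": False,
                    "custom_member_list": []},
}

def external_frontends(services):
    """Return names of enabled frontends that face the external VIP
    (hypothetical helper for reading the structure, nothing more)."""
    return sorted(name for name, cfg in services.items()
                  if cfg.get("enabled") and cfg.get("external"))

print(external_frontends(horizon_haproxy))
# ['horizon_external', 'horizon_external_redirect']
```

This matches what the log shows for every service in the play: an internal and an external frontend pair per API, with the external ones all sharing the single `api.testbed.osism.xyz` FQDN.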
2025-04-04 01:59:45.834907 | orchestrator | 2025-04-04 01:59:45.834925 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2025-04-04 01:59:45.834938 | orchestrator | Friday 04 April 2025 01:54:17 +0000 (0:00:01.109) 0:04:06.102 ********** 2025-04-04 01:59:45.834951 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-04-04 01:59:45.834965 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-04-04 01:59:45.834979 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-04-04 01:59:45.834993 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-04-04 01:59:45.835006 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-04-04 01:59:45.835019 | orchestrator | 
skipping: [testbed-node-0] 2025-04-04 01:59:45.835037 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-04-04 01:59:45.835050 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-04-04 01:59:45.835063 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-04-04 01:59:45.835075 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-04-04 01:59:45.835094 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-04-04 01:59:45.835107 | orchestrator | skipping: [testbed-node-1] 2025-04-04 01:59:45.835119 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': 
['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-04-04 01:59:45.835132 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-04-04 01:59:45.835150 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-04-04 01:59:45.835163 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-04-04 01:59:45.835176 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-04-04 01:59:45.835188 | orchestrator | skipping: [testbed-node-2] 2025-04-04 01:59:45.835201 | orchestrator | 2025-04-04 01:59:45.835213 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2025-04-04 01:59:45.835226 | orchestrator | Friday 04 April 2025 01:54:19 +0000 (0:00:01.620) 0:04:07.722 ********** 2025-04-04 01:59:45.835238 | orchestrator | changed: [testbed-node-0] 2025-04-04 01:59:45.835251 | orchestrator | changed: [testbed-node-1] 2025-04-04 01:59:45.835263 | orchestrator | changed: 
[testbed-node-2] 2025-04-04 01:59:45.835275 | orchestrator | 2025-04-04 01:59:45.835303 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2025-04-04 01:59:45.835316 | orchestrator | Friday 04 April 2025 01:54:20 +0000 (0:00:01.576) 0:04:09.299 ********** 2025-04-04 01:59:45.835328 | orchestrator | changed: [testbed-node-0] 2025-04-04 01:59:45.835341 | orchestrator | changed: [testbed-node-1] 2025-04-04 01:59:45.835353 | orchestrator | changed: [testbed-node-2] 2025-04-04 01:59:45.835366 | orchestrator | 2025-04-04 01:59:45.835378 | orchestrator | TASK [include_role : influxdb] ************************************************* 2025-04-04 01:59:45.835391 | orchestrator | Friday 04 April 2025 01:54:23 +0000 (0:00:02.690) 0:04:11.989 ********** 2025-04-04 01:59:45.835403 | orchestrator | skipping: [testbed-node-0] 2025-04-04 01:59:45.835416 | orchestrator | skipping: [testbed-node-1] 2025-04-04 01:59:45.835428 | orchestrator | skipping: [testbed-node-2] 2025-04-04 01:59:45.835441 | orchestrator | 2025-04-04 01:59:45.835453 | orchestrator | TASK [include_role : ironic] *************************************************** 2025-04-04 01:59:45.835466 | orchestrator | Friday 04 April 2025 01:54:24 +0000 (0:00:00.625) 0:04:12.614 ********** 2025-04-04 01:59:45.835478 | orchestrator | skipping: [testbed-node-0] 2025-04-04 01:59:45.835490 | orchestrator | skipping: [testbed-node-1] 2025-04-04 01:59:45.835503 | orchestrator | skipping: [testbed-node-2] 2025-04-04 01:59:45.835515 | orchestrator | 2025-04-04 01:59:45.835528 | orchestrator | TASK [include_role : keystone] ************************************************* 2025-04-04 01:59:45.835540 | orchestrator | Friday 04 April 2025 01:54:24 +0000 (0:00:00.351) 0:04:12.965 ********** 2025-04-04 01:59:45.835552 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-04 01:59:45.835570 | orchestrator | 2025-04-04 01:59:45.835583 | orchestrator | 
TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2025-04-04 01:59:45.835595 | orchestrator | Friday 04 April 2025 01:54:26 +0000 (0:00:01.848) 0:04:14.814 ********** 2025-04-04 01:59:45.835608 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-04-04 01:59:45.835623 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-04-04 01:59:45.835642 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-04-04 01:59:45.835657 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-04-04 01:59:45.835671 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-04-04 01:59:45.835690 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-04-04 01:59:45.835703 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': 
'5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-04-04 01:59:45.835722 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-04-04 01:59:45.835735 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-04-04 01:59:45.835748 | orchestrator | 2025-04-04 01:59:45.835761 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2025-04-04 01:59:45.835774 | orchestrator | Friday 04 April 2025 01:54:32 +0000 (0:00:06.190) 0:04:21.005 ********** 2025-04-04 01:59:45.835795 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-04-04 01:59:45.835817 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-04-04 01:59:45.835831 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-04-04 01:59:45.835843 | orchestrator | skipping: [testbed-node-0] 2025-04-04 01:59:45.835863 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-04-04 01:59:45.835876 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-04-04 01:59:45.835889 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': 
{'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-04-04 01:59:45.835907 | orchestrator | skipping: [testbed-node-1] 2025-04-04 01:59:45.835928 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-04-04 01:59:45.835942 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': 
['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-04-04 01:59:45.835955 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-04-04 01:59:45.835968 | orchestrator | skipping: [testbed-node-2] 2025-04-04 01:59:45.835981 | orchestrator | 2025-04-04 01:59:45.835993 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2025-04-04 01:59:45.836006 | orchestrator | Friday 04 April 2025 01:54:33 +0000 (0:00:01.196) 0:04:22.201 ********** 2025-04-04 01:59:45.836023 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}})  2025-04-04 01:59:45.836041 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance 
"roundrobin"']}})  2025-04-04 01:59:45.836053 | orchestrator | skipping: [testbed-node-0] 2025-04-04 01:59:45.836066 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}})  2025-04-04 01:59:45.836079 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}})  2025-04-04 01:59:45.836097 | orchestrator | skipping: [testbed-node-1] 2025-04-04 01:59:45.836110 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}})  2025-04-04 01:59:45.836123 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}})  2025-04-04 01:59:45.836136 | orchestrator | skipping: [testbed-node-2] 2025-04-04 01:59:45.836148 | orchestrator | 2025-04-04 01:59:45.836161 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2025-04-04 01:59:45.836173 | orchestrator | Friday 04 April 2025 01:54:35 +0000 (0:00:01.304) 0:04:23.506 ********** 2025-04-04 01:59:45.836185 | orchestrator | changed: [testbed-node-0] 2025-04-04 01:59:45.836198 | orchestrator | changed: [testbed-node-1] 2025-04-04 01:59:45.836210 | orchestrator | changed: [testbed-node-2] 2025-04-04 01:59:45.836223 | orchestrator | 2025-04-04 
01:59:45.836235 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2025-04-04 01:59:45.836248 | orchestrator | Friday 04 April 2025 01:54:36 +0000 (0:00:01.564) 0:04:25.070 ********** 2025-04-04 01:59:45.836260 | orchestrator | changed: [testbed-node-0] 2025-04-04 01:59:45.836272 | orchestrator | changed: [testbed-node-1] 2025-04-04 01:59:45.836302 | orchestrator | changed: [testbed-node-2] 2025-04-04 01:59:45.836315 | orchestrator | 2025-04-04 01:59:45.836328 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2025-04-04 01:59:45.836340 | orchestrator | Friday 04 April 2025 01:54:39 +0000 (0:00:02.915) 0:04:27.985 ********** 2025-04-04 01:59:45.836353 | orchestrator | skipping: [testbed-node-0] 2025-04-04 01:59:45.836365 | orchestrator | skipping: [testbed-node-1] 2025-04-04 01:59:45.836377 | orchestrator | skipping: [testbed-node-2] 2025-04-04 01:59:45.836390 | orchestrator | 2025-04-04 01:59:45.836402 | orchestrator | TASK [include_role : magnum] *************************************************** 2025-04-04 01:59:45.836415 | orchestrator | Friday 04 April 2025 01:54:39 +0000 (0:00:00.367) 0:04:28.353 ********** 2025-04-04 01:59:45.836428 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-04 01:59:45.836440 | orchestrator | 2025-04-04 01:59:45.836457 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2025-04-04 01:59:45.836469 | orchestrator | Friday 04 April 2025 01:54:41 +0000 (0:00:01.735) 0:04:30.088 ********** 2025-04-04 01:59:45.836483 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-04-04 01:59:45.836511 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-04-04 01:59:45.836530 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-04-04 01:59:45.836544 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-04-04 01:59:45.836558 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-04-04 01:59:45.836571 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-04-04 01:59:45.836583 | orchestrator | 2025-04-04 01:59:45.836596 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2025-04-04 01:59:45.836609 | orchestrator | Friday 04 April 2025 01:54:47 +0000 (0:00:05.330) 0:04:35.419 ********** 2025-04-04 01:59:45.836641 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-04-04 01:59:45.836656 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-04-04 01:59:45.836669 | orchestrator | skipping: [testbed-node-0] 2025-04-04 01:59:45.836682 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-04-04 01:59:45.836695 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-04-04 01:59:45.836708 | orchestrator | skipping: [testbed-node-1] 2025-04-04 01:59:45.836734 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-04-04 01:59:45.836753 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-04-04 01:59:45.836766 | orchestrator | skipping: [testbed-node-2]
2025-04-04 01:59:45.836779 | orchestrator |
2025-04-04 01:59:45.836791 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************
2025-04-04 01:59:45.836804 | orchestrator | Friday 04 April 2025 01:54:48 +0000 (0:00:01.353) 0:04:36.772 **********
2025-04-04 01:59:45.836817 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})
2025-04-04 01:59:45.836829 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})
2025-04-04 01:59:45.836846 | orchestrator | skipping: [testbed-node-0]
2025-04-04 01:59:45.836860 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})
2025-04-04 01:59:45.836873 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})
2025-04-04 01:59:45.836886 | orchestrator | skipping: [testbed-node-1]
2025-04-04 01:59:45.836898 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})
2025-04-04 01:59:45.836911 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})
2025-04-04 01:59:45.836923 | orchestrator | skipping: [testbed-node-2]
2025-04-04 01:59:45.836936 | orchestrator |
2025-04-04 01:59:45.836948 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] *************
2025-04-04 01:59:45.836961 | orchestrator | Friday 04 April 2025 01:54:49 +0000 (0:00:01.486) 0:04:38.259 **********
2025-04-04 01:59:45.836973 | orchestrator | changed: [testbed-node-0]
2025-04-04 01:59:45.836985 | orchestrator | changed: [testbed-node-1]
2025-04-04 01:59:45.836998 | orchestrator | changed: [testbed-node-2]
2025-04-04 01:59:45.837010 | orchestrator |
2025-04-04 01:59:45.837023 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] *************
2025-04-04 01:59:45.837035 | orchestrator | Friday 04 April 2025 01:54:51 +0000 (0:00:01.802) 0:04:40.061 **********
2025-04-04 01:59:45.837048 | orchestrator | changed: [testbed-node-0]
2025-04-04 01:59:45.837060 | orchestrator | changed: [testbed-node-1]
2025-04-04 01:59:45.837073 | orchestrator | changed: [testbed-node-2]
2025-04-04 01:59:45.837086 | orchestrator |
2025-04-04 01:59:45.837098 | orchestrator | TASK [include_role : manila] ***************************************************
2025-04-04 01:59:45.837116 | orchestrator | Friday 04 April 2025 01:54:54 +0000 (0:00:02.941) 0:04:43.003 **********
2025-04-04 01:59:45.837128 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2
2025-04-04 01:59:45.837140 | orchestrator |
2025-04-04 01:59:45.837153 |
orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2025-04-04 01:59:45.837165 | orchestrator | Friday 04 April 2025 01:54:56 +0000 (0:00:01.432) 0:04:44.435 ********** 2025-04-04 01:59:45.837183 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-04-04 01:59:45.837197 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-04-04 01:59:45.837210 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:18.2.2.20241206', 'enabled': 
True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-04-04 01:59:45.837231 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-04-04 01:59:45.837245 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-04-04 
01:59:45.837268 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-04-04 01:59:45.837327 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-04-04 01:59:45.837355 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-04-04 01:59:45.837377 
| orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-04-04 01:59:45.837391 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-04-04 01:59:45.837404 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', 
'/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-04-04 01:59:45.837423 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-04-04 01:59:45.837436 | orchestrator | 2025-04-04 01:59:45.837449 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2025-04-04 01:59:45.837462 | orchestrator | Friday 04 April 2025 01:55:01 +0000 (0:00:05.431) 0:04:49.867 ********** 2025-04-04 01:59:45.837489 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-04-04 01:59:45.837637 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-04-04 01:59:45.837665 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-04-04 01:59:45.837679 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-04-04 01:59:45.837692 | orchestrator | skipping: [testbed-node-0] 2025-04-04 01:59:45.837721 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-04-04 01:59:45.837735 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-04-04 01:59:45.837817 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-04-04 01:59:45.837854 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-04-04 01:59:45.837865 | orchestrator | skipping: [testbed-node-1] 2025-04-04 01:59:45.837877 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': 
'8786'}}}})  2025-04-04 01:59:45.837888 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-04-04 01:59:45.837904 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-04-04 01:59:45.837916 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  
2025-04-04 01:59:45.837934 | orchestrator | skipping: [testbed-node-2] 2025-04-04 01:59:45.837945 | orchestrator | 2025-04-04 01:59:45.837955 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2025-04-04 01:59:45.837966 | orchestrator | Friday 04 April 2025 01:55:02 +0000 (0:00:01.127) 0:04:50.995 ********** 2025-04-04 01:59:45.837976 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-04-04 01:59:45.838080 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-04-04 01:59:45.838104 | orchestrator | skipping: [testbed-node-0] 2025-04-04 01:59:45.838116 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-04-04 01:59:45.838127 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-04-04 01:59:45.838138 | orchestrator | skipping: [testbed-node-1] 2025-04-04 01:59:45.838149 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-04-04 01:59:45.838160 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-04-04 01:59:45.838171 | orchestrator | skipping: [testbed-node-2] 2025-04-04 01:59:45.838182 | 
orchestrator | 2025-04-04 01:59:45.838193 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2025-04-04 01:59:45.838204 | orchestrator | Friday 04 April 2025 01:55:04 +0000 (0:00:01.633) 0:04:52.628 ********** 2025-04-04 01:59:45.838215 | orchestrator | changed: [testbed-node-0] 2025-04-04 01:59:45.838226 | orchestrator | changed: [testbed-node-1] 2025-04-04 01:59:45.838236 | orchestrator | changed: [testbed-node-2] 2025-04-04 01:59:45.838247 | orchestrator | 2025-04-04 01:59:45.838258 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2025-04-04 01:59:45.838275 | orchestrator | Friday 04 April 2025 01:55:05 +0000 (0:00:01.592) 0:04:54.221 ********** 2025-04-04 01:59:45.838303 | orchestrator | changed: [testbed-node-0] 2025-04-04 01:59:45.838313 | orchestrator | changed: [testbed-node-1] 2025-04-04 01:59:45.838324 | orchestrator | changed: [testbed-node-2] 2025-04-04 01:59:45.838334 | orchestrator | 2025-04-04 01:59:45.838344 | orchestrator | TASK [include_role : mariadb] ************************************************** 2025-04-04 01:59:45.838354 | orchestrator | Friday 04 April 2025 01:55:08 +0000 (0:00:02.683) 0:04:56.904 ********** 2025-04-04 01:59:45.838364 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-04 01:59:45.838374 | orchestrator | 2025-04-04 01:59:45.838385 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2025-04-04 01:59:45.838395 | orchestrator | Friday 04 April 2025 01:55:10 +0000 (0:00:01.805) 0:04:58.709 ********** 2025-04-04 01:59:45.838405 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-04-04 01:59:45.838416 | orchestrator | 2025-04-04 01:59:45.838426 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2025-04-04 01:59:45.838436 | orchestrator | Friday 04 April 2025 01:55:14 +0000 
(0:00:04.320) 0:05:03.030 ********** 2025-04-04 01:59:45.838446 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-04-04 01:59:45.838514 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-04-04 01:59:45.838534 | orchestrator | skipping: [testbed-node-1] 2025-04-04 01:59:45.838557 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 
'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-04-04 01:59:45.838575 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-04-04 01:59:45.838587 | orchestrator | skipping: [testbed-node-0] 2025-04-04 01:59:45.838652 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 
'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-04-04 01:59:45.838688 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-04-04 01:59:45.838700 | orchestrator | skipping: [testbed-node-2] 2025-04-04 01:59:45.838712 | orchestrator | 2025-04-04 01:59:45.838723 | orchestrator | 
TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2025-04-04 01:59:45.838734 | orchestrator | Friday 04 April 2025 01:55:18 +0000 (0:00:03.909) 0:05:06.939 ********** 2025-04-04 01:59:45.838746 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server 
testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-04-04 01:59:45.838820 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-04-04 01:59:45.838837 | orchestrator | skipping: [testbed-node-0] 2025-04-04 01:59:45.838854 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 
192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-04-04 01:59:45.838874 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-04-04 01:59:45.838886 | orchestrator | skipping: [testbed-node-1] 2025-04-04 01:59:45.838957 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-04-04 01:59:45.838984 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-04-04 01:59:45.838996 | 
orchestrator | skipping: [testbed-node-2] 2025-04-04 01:59:45.839007 | orchestrator | 2025-04-04 01:59:45.839018 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2025-04-04 01:59:45.839030 | orchestrator | Friday 04 April 2025 01:55:22 +0000 (0:00:03.838) 0:05:10.778 ********** 2025-04-04 01:59:45.839041 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})  2025-04-04 01:59:45.839053 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})  2025-04-04 01:59:45.839065 | orchestrator | skipping: [testbed-node-0] 2025-04-04 01:59:45.839076 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 
'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})  2025-04-04 01:59:45.839088 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})  2025-04-04 01:59:45.839099 | orchestrator | skipping: [testbed-node-1] 2025-04-04 01:59:45.839160 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})  2025-04-04 01:59:45.839186 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server 
testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})  2025-04-04 01:59:45.839198 | orchestrator | skipping: [testbed-node-2] 2025-04-04 01:59:45.839209 | orchestrator | 2025-04-04 01:59:45.839221 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2025-04-04 01:59:45.839232 | orchestrator | Friday 04 April 2025 01:55:26 +0000 (0:00:04.021) 0:05:14.800 ********** 2025-04-04 01:59:45.839243 | orchestrator | changed: [testbed-node-0] 2025-04-04 01:59:45.839254 | orchestrator | changed: [testbed-node-1] 2025-04-04 01:59:45.839265 | orchestrator | changed: [testbed-node-2] 2025-04-04 01:59:45.839276 | orchestrator | 2025-04-04 01:59:45.839302 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2025-04-04 01:59:45.839313 | orchestrator | Friday 04 April 2025 01:55:29 +0000 (0:00:02.991) 0:05:17.791 ********** 2025-04-04 01:59:45.839323 | orchestrator | skipping: [testbed-node-0] 2025-04-04 01:59:45.839333 | orchestrator | skipping: [testbed-node-1] 2025-04-04 01:59:45.839344 | orchestrator | skipping: [testbed-node-2] 2025-04-04 01:59:45.839354 | orchestrator | 2025-04-04 01:59:45.839364 | orchestrator | TASK [include_role : masakari] ************************************************* 2025-04-04 01:59:45.839374 | orchestrator | Friday 04 April 2025 01:55:31 +0000 (0:00:02.605) 0:05:20.396 ********** 2025-04-04 01:59:45.839384 | orchestrator | skipping: [testbed-node-0] 2025-04-04 01:59:45.839395 | orchestrator | skipping: [testbed-node-1] 2025-04-04 01:59:45.839405 | orchestrator | skipping: [testbed-node-2] 2025-04-04 01:59:45.839416 | orchestrator | 2025-04-04 01:59:45.839426 | orchestrator | TASK [include_role : memcached] 
************************************************ 2025-04-04 01:59:45.839436 | orchestrator | Friday 04 April 2025 01:55:32 +0000 (0:00:00.398) 0:05:20.795 ********** 2025-04-04 01:59:45.839446 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-04 01:59:45.839456 | orchestrator | 2025-04-04 01:59:45.839467 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2025-04-04 01:59:45.839477 | orchestrator | Friday 04 April 2025 01:55:34 +0000 (0:00:01.817) 0:05:22.612 ********** 2025-04-04 01:59:45.839487 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.14.20241206', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-04-04 01:59:45.839507 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.14.20241206', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 
'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-04-04 01:59:45.839582 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.14.20241206', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-04-04 01:59:45.839602 | orchestrator | 2025-04-04 01:59:45.839613 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2025-04-04 01:59:45.839623 | orchestrator | Friday 04 April 2025 01:55:36 +0000 (0:00:02.108) 0:05:24.720 ********** 2025-04-04 01:59:45.839634 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.14.20241206', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option 
srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-04-04 01:59:45.839645 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.14.20241206', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-04-04 01:59:45.839656 | orchestrator | skipping: [testbed-node-0] 2025-04-04 01:59:45.839666 | orchestrator | skipping: [testbed-node-1] 2025-04-04 01:59:45.839677 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.14.20241206', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-04-04 01:59:45.839688 | orchestrator | skipping: [testbed-node-2] 2025-04-04 01:59:45.839698 | orchestrator | 2025-04-04 01:59:45.839708 | orchestrator | TASK [haproxy-config : Configuring 
firewall for memcached] ********************* 2025-04-04 01:59:45.839724 | orchestrator | Friday 04 April 2025 01:55:37 +0000 (0:00:00.717) 0:05:25.438 ********** 2025-04-04 01:59:45.839735 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-04-04 01:59:45.839746 | orchestrator | skipping: [testbed-node-0] 2025-04-04 01:59:45.839757 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-04-04 01:59:45.839767 | orchestrator | skipping: [testbed-node-1] 2025-04-04 01:59:45.839778 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-04-04 01:59:45.839788 | orchestrator | skipping: [testbed-node-2] 2025-04-04 01:59:45.839798 | orchestrator | 2025-04-04 01:59:45.839860 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2025-04-04 01:59:45.839879 | orchestrator | Friday 04 April 2025 01:55:38 +0000 (0:00:01.063) 0:05:26.502 ********** 2025-04-04 01:59:45.839889 | orchestrator | skipping: [testbed-node-0] 2025-04-04 01:59:45.839900 | orchestrator | skipping: [testbed-node-1] 2025-04-04 01:59:45.839910 | orchestrator | skipping: [testbed-node-2] 2025-04-04 01:59:45.839920 | orchestrator | 2025-04-04 01:59:45.839931 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] 
********** 2025-04-04 01:59:45.839941 | orchestrator | Friday 04 April 2025 01:55:39 +0000 (0:00:01.023) 0:05:27.526 ********** 2025-04-04 01:59:45.839951 | orchestrator | skipping: [testbed-node-0] 2025-04-04 01:59:45.839961 | orchestrator | skipping: [testbed-node-1] 2025-04-04 01:59:45.839972 | orchestrator | skipping: [testbed-node-2] 2025-04-04 01:59:45.839982 | orchestrator | 2025-04-04 01:59:45.839992 | orchestrator | TASK [include_role : mistral] ************************************************** 2025-04-04 01:59:45.840002 | orchestrator | Friday 04 April 2025 01:55:41 +0000 (0:00:02.225) 0:05:29.751 ********** 2025-04-04 01:59:45.840012 | orchestrator | skipping: [testbed-node-0] 2025-04-04 01:59:45.840023 | orchestrator | skipping: [testbed-node-1] 2025-04-04 01:59:45.840033 | orchestrator | skipping: [testbed-node-2] 2025-04-04 01:59:45.840043 | orchestrator | 2025-04-04 01:59:45.840053 | orchestrator | TASK [include_role : neutron] ************************************************** 2025-04-04 01:59:45.840063 | orchestrator | Friday 04 April 2025 01:55:41 +0000 (0:00:00.352) 0:05:30.104 ********** 2025-04-04 01:59:45.840074 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-04 01:59:45.840084 | orchestrator | 2025-04-04 01:59:45.840094 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2025-04-04 01:59:45.840104 | orchestrator | Friday 04 April 2025 01:55:43 +0000 (0:00:01.954) 0:05:32.058 ********** 2025-04-04 01:59:45.840123 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-04-04 01:59:45.840141 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-04 01:59:45.840152 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-04 01:59:45.840216 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-04 01:59:45.840235 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-04 01:59:45.840246 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-04 01:59:45.840261 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-04 01:59:45.840364 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-04 01:59:45.840380 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-04 01:59:45.840453 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-04 01:59:45.840469 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-04 01:59:45.840479 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-04 01:59:45.840497 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-04 01:59:45.840516 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-04 01:59:45.840537 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-04 01:59:45.840549 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-04 01:59:45.840613 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-04 01:59:45.840633 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': 
{'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-04-04 01:59:45.840664 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-04 01:59:45.840676 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-04 01:59:45.840688 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-04 01:59:45.840751 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 
5672"], 'timeout': '30'}}})  2025-04-04 01:59:45.840770 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-04 01:59:45.840780 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-04 01:59:45.840795 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-04-04 01:59:45.840805 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-04 01:59:45.840823 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-04 01:59:45.840875 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-04 01:59:45.840892 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-04 01:59:45.840902 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-04 01:59:45.840922 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-04 01:59:45.840932 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-04 01:59:45.840941 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-04 01:59:45.840993 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 
'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-04 01:59:45.841009 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-04 01:59:45.841019 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 
'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-04 01:59:45.841031 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-04 01:59:45.841049 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-04 01:59:45.841058 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': 
False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-04 01:59:45.841111 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-04 01:59:45.841124 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-04 01:59:45.841142 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-04 01:59:45.841151 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-04 01:59:45.841167 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-04 01:59:45.841177 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-04 01:59:45.841228 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-04 01:59:45.841241 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-04 01:59:45.841259 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-04 01:59:45.841268 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-04 01:59:45.841300 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-04 01:59:45.841310 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-04 01:59:45.841319 | orchestrator | 2025-04-04 01:59:45.841327 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2025-04-04 01:59:45.841336 | orchestrator | Friday 04 April 2025 01:55:53 +0000 (0:00:09.873) 0:05:41.932 ********** 2025-04-04 01:59:45.841398 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-04 01:59:45.841421 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-04 01:59:45.841431 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-04 01:59:45.841448 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-04 01:59:45.841458 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-04 01:59:45.841512 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-04 01:59:45.841533 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-04 01:59:45.841543 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-04 01:59:45.841552 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-04 01:59:45.841561 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-04 01:59:45.841578 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-04 01:59:45.841632 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-04 01:59:45.841656 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-04 01:59:45.841665 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-04 01:59:45.841674 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': 
{'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-04 01:59:45.841683 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-04 01:59:45.841700 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-04 01:59:45.841753 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-04 01:59:45.841774 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-04 01:59:45.841784 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-04 01:59:45.841793 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-04 01:59:45.841802 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-04 01:59:45.841819 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-04 01:59:45.841873 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-04 01:59:45.841895 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-04 01:59:45.841904 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': 
'30'}}})  2025-04-04 01:59:45.841914 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-04 01:59:45.841923 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-04 01:59:45.841932 | orchestrator | skipping: [testbed-node-1] 2025-04-04 01:59:45.841948 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-04 01:59:45.841958 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-04 01:59:45.842038 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-04 01:59:45.842055 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u 
openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-04 01:59:45.842073 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-04 01:59:45.842082 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-04 01:59:45.842091 | orchestrator | skipping: [testbed-node-0] 2025-04-04 01:59:45.842100 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': 
{'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-04 01:59:45.842162 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-04 01:59:45.842180 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 
'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-04 01:59:45.842189 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-04 01:59:45.842198 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 
'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-04 01:59:45.842215 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-04 01:59:45.842229 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-04 01:59:45.842297 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-04 01:59:45.842316 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 
'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-04 01:59:45.842326 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-04 01:59:45.842343 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-04 01:59:45.842353 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-04 01:59:45.842362 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-04 01:59:45.842376 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-04 01:59:45.842431 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 
'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-04 01:59:45.842452 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-04 01:59:45.842466 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-04 01:59:45.842476 | orchestrator | skipping: [testbed-node-2] 2025-04-04 01:59:45.842485 | orchestrator | 2025-04-04 01:59:45.842493 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2025-04-04 01:59:45.842502 | orchestrator | Friday 04 April 2025 01:55:57 +0000 (0:00:04.103) 0:05:46.036 ********** 2025-04-04 01:59:45.842511 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-04-04 01:59:45.842524 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-04-04 01:59:45.842537 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-04-04 01:59:45.842546 | orchestrator | skipping: [testbed-node-2] 2025-04-04 01:59:45.842555 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-04-04 01:59:45.842564 | orchestrator | skipping: [testbed-node-0] 2025-04-04 01:59:45.842573 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-04-04 01:59:45.842582 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-04-04 01:59:45.842590 | orchestrator | skipping: [testbed-node-1] 2025-04-04 01:59:45.842605 | orchestrator | 2025-04-04 01:59:45.842614 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2025-04-04 01:59:45.842625 | orchestrator | Friday 04 April 2025 01:56:00 +0000 (0:00:02.890) 0:05:48.926 ********** 2025-04-04 01:59:45.842634 | orchestrator | changed: [testbed-node-0] 2025-04-04 01:59:45.842643 | orchestrator | changed: [testbed-node-1] 2025-04-04 01:59:45.842672 | orchestrator | changed: [testbed-node-2] 2025-04-04 01:59:45.842682 | orchestrator | 2025-04-04 01:59:45.842691 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2025-04-04 01:59:45.842700 | orchestrator | Friday 04 April 2025 01:56:02 +0000 (0:00:01.776) 0:05:50.703 ********** 2025-04-04 01:59:45.842709 | orchestrator | changed: [testbed-node-0] 2025-04-04 01:59:45.842717 | orchestrator | changed: [testbed-node-1] 2025-04-04 01:59:45.842726 | orchestrator | changed: [testbed-node-2] 2025-04-04 01:59:45.842734 | orchestrator | 2025-04-04 01:59:45.842743 | orchestrator | TASK [include_role : placement] ************************************************ 2025-04-04 01:59:45.842752 | orchestrator | Friday 04 April 2025 01:56:05 +0000 (0:00:02.997) 0:05:53.700 ********** 2025-04-04 01:59:45.842760 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-04 01:59:45.842769 | orchestrator | 2025-04-04 01:59:45.842777 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2025-04-04 01:59:45.842786 | orchestrator | Friday 04 April 2025 01:56:07 +0000 (0:00:02.021) 0:05:55.722 ********** 2025-04-04 01:59:45.842795 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-04-04 01:59:45.842804 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-04-04 01:59:45.842818 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 
'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-04-04 01:59:45.842827 | orchestrator |
2025-04-04 01:59:45.842836 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] ***
2025-04-04 01:59:45.842844 | orchestrator | Friday 04 April 2025 01:56:13 +0000 (0:00:05.934) 0:06:01.656 **********
2025-04-04 01:59:45.842878 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-04-04 01:59:45.842889 | orchestrator | skipping: [testbed-node-0]
2025-04-04 01:59:45.842898 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-04-04 01:59:45.842907 | orchestrator | skipping: [testbed-node-1]
2025-04-04 01:59:45.842916 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-04-04 01:59:45.842929 | orchestrator | skipping: [testbed-node-2]
2025-04-04 01:59:45.842938 | orchestrator |
2025-04-04 01:59:45.842946 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] *********************
2025-04-04 01:59:45.842955 | orchestrator | Friday 04 April 2025 01:56:13 +0000 (0:00:00.617) 0:06:02.274 **********
2025-04-04 01:59:45.842964 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2025-04-04 01:59:45.842973 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2025-04-04 01:59:45.842982 | orchestrator | skipping: [testbed-node-0]
2025-04-04 01:59:45.842991 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2025-04-04 01:59:45.842999 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2025-04-04 01:59:45.843008 | orchestrator | skipping: [testbed-node-1]
2025-04-04 01:59:45.843017 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2025-04-04 01:59:45.843026 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2025-04-04 01:59:45.843034 | orchestrator | skipping: [testbed-node-2]
2025-04-04 01:59:45.843043 | orchestrator |
2025-04-04 01:59:45.843052 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] **********
2025-04-04 01:59:45.843079 | orchestrator | Friday 04 April 2025 01:56:15 +0000 (0:00:01.622) 0:06:03.896 **********
2025-04-04 01:59:45.843090 | orchestrator | changed: [testbed-node-0]
2025-04-04 01:59:45.843099 | orchestrator | changed: [testbed-node-1]
2025-04-04 01:59:45.843110 | orchestrator | changed: [testbed-node-2]
2025-04-04 01:59:45.843119 | orchestrator |
2025-04-04 01:59:45.843129 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] **********
2025-04-04 01:59:45.843139 | orchestrator | Friday 04 April 2025 01:56:16 +0000 (0:00:01.466) 0:06:05.363 **********
2025-04-04 01:59:45.843148 | orchestrator | changed: [testbed-node-0]
2025-04-04 01:59:45.843158 | orchestrator | changed: [testbed-node-1]
2025-04-04 01:59:45.843168 | orchestrator | changed: [testbed-node-2]
2025-04-04 01:59:45.843177 | orchestrator |
2025-04-04 01:59:45.843187 | orchestrator | TASK [include_role : nova] *****************************************************
2025-04-04 01:59:45.843197 | orchestrator | Friday 04 April 2025 01:56:19 +0000 (0:00:02.521) 0:06:07.885 **********
2025-04-04 01:59:45.843206 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2
2025-04-04 01:59:45.843216 | orchestrator |
2025-04-04 01:59:45.843225 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] ***********************
2025-04-04 01:59:45.843236 | orchestrator | Friday 04 April 2025 01:56:21 +0000 (0:00:01.940) 0:06:09.825 **********
2025-04-04 01:59:45.843250 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-04-04 01:59:45.843268 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-04-04 01:59:45.843278 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-04-04 01:59:45.843325 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-04-04 01:59:45.843337 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-04-04 01:59:45.843351 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-04-04 01:59:45.843368 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-04-04 01:59:45.843378 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-04-04 01:59:45.843408 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-04-04 01:59:45.843419 | orchestrator |
2025-04-04 01:59:45.843429 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] ***
2025-04-04 01:59:45.843438 | orchestrator | Friday 04 April 2025 01:56:28 +0000 (0:00:07.010) 0:06:16.836 **********
2025-04-04 01:59:45.843447 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-04-04 01:59:45.843466 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-04-04 01:59:45.843476 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-04-04 01:59:45.843485 | orchestrator | skipping: [testbed-node-0]
2025-04-04 01:59:45.843494 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-04-04 01:59:45.843521 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-04-04 01:59:45.843536 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-04-04 01:59:45.843545 | orchestrator | skipping: [testbed-node-1]
2025-04-04 01:59:45.843560 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-04-04 01:59:45.843570 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-04-04 01:59:45.843579 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-04-04 01:59:45.843588 | orchestrator | skipping: [testbed-node-2]
2025-04-04 01:59:45.843597 | orchestrator |
2025-04-04 01:59:45.843605 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] **************************
2025-04-04 01:59:45.843614 | orchestrator | Friday 04 April 2025 01:56:29 +0000 (0:00:01.529) 0:06:18.366 **********
2025-04-04 01:59:45.843624 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2025-04-04 01:59:45.843652 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2025-04-04 01:59:45.843666 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2025-04-04 01:59:45.843675 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2025-04-04 01:59:45.843684 | orchestrator | skipping: [testbed-node-0]
2025-04-04 01:59:45.843693 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2025-04-04 01:59:45.843702 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2025-04-04 01:59:45.843711 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2025-04-04 01:59:45.843720 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2025-04-04 01:59:45.843729 | orchestrator | skipping: [testbed-node-1]
2025-04-04 01:59:45.843737 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2025-04-04 01:59:45.843746 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2025-04-04 01:59:45.843755 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2025-04-04 01:59:45.843764 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2025-04-04 01:59:45.843772 | orchestrator | skipping: [testbed-node-2]
2025-04-04 01:59:45.843781 | orchestrator |
2025-04-04 01:59:45.843790 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] ***************
2025-04-04 01:59:45.843798 | orchestrator | Friday 04 April 2025 01:56:31 +0000 (0:00:01.786) 0:06:20.153 **********
2025-04-04 01:59:45.843807 | orchestrator | changed: [testbed-node-0]
2025-04-04 01:59:45.843816 | orchestrator | changed: [testbed-node-1]
2025-04-04 01:59:45.843824 | orchestrator | changed: [testbed-node-2]
2025-04-04 01:59:45.843833 | orchestrator |
2025-04-04 01:59:45.843842 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] ***************
2025-04-04 01:59:45.843850 | orchestrator | Friday 04 April 2025 01:56:33 +0000 (0:00:01.848) 0:06:22.001 **********
2025-04-04 01:59:45.843859 | orchestrator | changed: [testbed-node-0]
2025-04-04 01:59:45.843868 | orchestrator | changed: [testbed-node-1]
2025-04-04 01:59:45.843876 | orchestrator | changed: [testbed-node-2]
2025-04-04 01:59:45.843885 | orchestrator |
2025-04-04 01:59:45.843894 | orchestrator | TASK [include_role : nova-cell] ************************************************
2025-04-04 01:59:45.843902 | orchestrator | Friday 04 April 2025 01:56:36 +0000 (0:00:03.105) 0:06:25.106 **********
2025-04-04 01:59:45.843911 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2
2025-04-04 01:59:45.843923 | orchestrator |
2025-04-04 01:59:45.843932 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ******************
2025-04-04 01:59:45.843941 | orchestrator | Friday 04 April 2025 01:56:38 +0000 (0:00:02.084) 0:06:27.191 **********
2025-04-04 01:59:45.843950 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy)
2025-04-04 01:59:45.843959 | orchestrator |
2025-04-04 01:59:45.843973 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] ***
2025-04-04 01:59:45.843982 | orchestrator | Friday 04 April 2025 01:56:40 +0000 (0:00:01.602) 0:06:28.793 **********
2025-04-04 01:59:45.844008 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-04-04 01:59:45.844019 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-04-04 01:59:45.844028 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-04-04 01:59:45.844037 | orchestrator |
2025-04-04 01:59:45.844046 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] ***
2025-04-04 01:59:45.844054 | orchestrator | Friday 04 April 2025 01:56:46 +0000 (0:00:06.047) 0:06:34.841 **********
2025-04-04 01:59:45.844063 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-04-04 01:59:45.844072 | orchestrator | skipping: [testbed-node-0]
2025-04-04 01:59:45.844088 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-04-04 01:59:45.844097 | orchestrator | skipping: [testbed-node-1]
2025-04-04 01:59:45.844106 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-04-04 01:59:45.844119 | orchestrator | skipping: [testbed-node-2]
2025-04-04 01:59:45.844128 | orchestrator |
2025-04-04 01:59:45.844136 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] *****
2025-04-04 01:59:45.844145 | orchestrator | Friday 04 April 2025 01:56:48 +0000 (0:00:02.530) 0:06:37.371 **********
2025-04-04 01:59:45.844154 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2025-04-04 01:59:45.844163 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2025-04-04 01:59:45.844172 | orchestrator | skipping: [testbed-node-0]
2025-04-04 01:59:45.844180 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2025-04-04 01:59:45.844211 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2025-04-04 01:59:45.844221 | orchestrator | skipping: [testbed-node-1]
2025-04-04 01:59:45.844230 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2025-04-04 01:59:45.844239 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2025-04-04 01:59:45.844248 | orchestrator | skipping: [testbed-node-2]
2025-04-04 01:59:45.844256 | orchestrator |
2025-04-04 01:59:45.844265 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] **********
2025-04-04 01:59:45.844274 | orchestrator | Friday 04 April 2025 01:56:51 +0000 (0:00:02.340) 0:06:39.712 **********
2025-04-04 01:59:45.844298 | orchestrator | changed: [testbed-node-0]
2025-04-04 01:59:45.844307 | orchestrator | changed: [testbed-node-1]
2025-04-04 01:59:45.844316 | orchestrator | changed: [testbed-node-2]
2025-04-04 01:59:45.844325 | orchestrator |
2025-04-04 01:59:45.844334 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] **********
2025-04-04 01:59:45.844342 | orchestrator | Friday 04 April 2025 01:56:54 +0000 (0:00:03.518) 0:06:43.230 **********
2025-04-04 01:59:45.844351 | orchestrator | changed: [testbed-node-0]
2025-04-04 01:59:45.844360 | orchestrator | changed: [testbed-node-1]
2025-04-04 01:59:45.844368 | orchestrator | changed: [testbed-node-2]
2025-04-04 01:59:45.844377 | orchestrator |
2025-04-04 01:59:45.844386 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] *************
2025-04-04 01:59:45.844395 | orchestrator | Friday 04 April 2025 01:56:59 +0000 (0:00:04.343) 0:06:47.574 **********
2025-04-04 01:59:45.844407 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy)
2025-04-04 01:59:45.844416 | orchestrator |
2025-04-04 01:59:45.844425 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] ***
2025-04-04 01:59:45.844434 | orchestrator | Friday 04 April 2025 01:57:00 +0000 (0:00:01.687) 0:06:49.261 **********
2025-04-04 01:59:45.844443 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-04-04 01:59:45.844456 | orchestrator | skipping: [testbed-node-0]
2025-04-04 01:59:45.844465 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-04-04 01:59:45.844475 | orchestrator | skipping: [testbed-node-1]
2025-04-04 01:59:45.844483 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-04-04 01:59:45.844492 | orchestrator | skipping: [testbed-node-2]
2025-04-04 01:59:45.844501 | orchestrator |
2025-04-04 01:59:45.844510 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] ***
2025-04-04 01:59:45.844519 | orchestrator | Friday 04 April 2025 01:57:02 +0000 (0:00:01.874) 0:06:51.135 **********
2025-04-04 01:59:45.844549 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-04-04 01:59:45.844559 | orchestrator | skipping: [testbed-node-0]
2025-04-04 01:59:45.844575 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-04-04 01:59:45.844649 | orchestrator | skipping: [testbed-node-1]
2025-04-04 01:59:45.844659 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-04-04 01:59:45.844668 | orchestrator | skipping: [testbed-node-2]
2025-04-04 01:59:45.844677 | orchestrator |
2025-04-04 01:59:45.844686 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] ***
2025-04-04 01:59:45.844700 | orchestrator | Friday 04 April 2025 01:57:05 +0000 (0:00:02.303) 0:06:53.438 **********
2025-04-04 01:59:45.844708 | orchestrator | skipping: [testbed-node-0]
2025-04-04 01:59:45.844717 | orchestrator | skipping: [testbed-node-1]
2025-04-04 01:59:45.844725 | orchestrator | skipping: [testbed-node-2]
2025-04-04 01:59:45.844737 | orchestrator |
2025-04-04 01:59:45.844746 | orchestrator | TASK
[proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-04-04 01:59:45.844755 | orchestrator | Friday 04 April 2025 01:57:07 +0000 (0:00:02.784) 0:06:56.222 ********** 2025-04-04 01:59:45.844763 | orchestrator | ok: [testbed-node-0] 2025-04-04 01:59:45.844772 | orchestrator | ok: [testbed-node-1] 2025-04-04 01:59:45.844781 | orchestrator | ok: [testbed-node-2] 2025-04-04 01:59:45.844789 | orchestrator | 2025-04-04 01:59:45.844798 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-04-04 01:59:45.844807 | orchestrator | Friday 04 April 2025 01:57:11 +0000 (0:00:04.021) 0:07:00.244 ********** 2025-04-04 01:59:45.844815 | orchestrator | ok: [testbed-node-0] 2025-04-04 01:59:45.844824 | orchestrator | ok: [testbed-node-1] 2025-04-04 01:59:45.844833 | orchestrator | ok: [testbed-node-2] 2025-04-04 01:59:45.844841 | orchestrator | 2025-04-04 01:59:45.844850 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2025-04-04 01:59:45.844859 | orchestrator | Friday 04 April 2025 01:57:16 +0000 (0:00:04.219) 0:07:04.463 ********** 2025-04-04 01:59:45.844868 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2025-04-04 01:59:45.844876 | orchestrator | 2025-04-04 01:59:45.844885 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2025-04-04 01:59:45.844894 | orchestrator | Friday 04 April 2025 01:57:17 +0000 (0:00:01.916) 0:07:06.380 ********** 2025-04-04 01:59:45.844902 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 
'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-04-04 01:59:45.844911 | orchestrator | skipping: [testbed-node-0] 2025-04-04 01:59:45.844920 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-04-04 01:59:45.844929 | orchestrator | skipping: [testbed-node-1] 2025-04-04 01:59:45.844960 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-04-04 01:59:45.844970 | orchestrator | skipping: [testbed-node-2] 2025-04-04 01:59:45.844979 | orchestrator | 2025-04-04 01:59:45.844987 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2025-04-04 01:59:45.844996 | orchestrator | Friday 04 April 2025 01:57:20 +0000 (0:00:02.449) 0:07:08.830 ********** 2025-04-04 01:59:45.845005 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 
'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-04-04 01:59:45.845018 | orchestrator | skipping: [testbed-node-0] 2025-04-04 01:59:45.845027 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-04-04 01:59:45.845036 | orchestrator | skipping: [testbed-node-1] 2025-04-04 01:59:45.845045 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-04-04 01:59:45.845054 | orchestrator | skipping: [testbed-node-2] 2025-04-04 01:59:45.845063 | orchestrator | 2025-04-04 01:59:45.845071 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2025-04-04 
01:59:45.845080 | orchestrator | Friday 04 April 2025 01:57:22 +0000 (0:00:01.873) 0:07:10.704 ********** 2025-04-04 01:59:45.845088 | orchestrator | skipping: [testbed-node-1] 2025-04-04 01:59:45.845097 | orchestrator | skipping: [testbed-node-0] 2025-04-04 01:59:45.845105 | orchestrator | skipping: [testbed-node-2] 2025-04-04 01:59:45.845114 | orchestrator | 2025-04-04 01:59:45.845123 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-04-04 01:59:45.845131 | orchestrator | Friday 04 April 2025 01:57:24 +0000 (0:00:02.687) 0:07:13.391 ********** 2025-04-04 01:59:45.845140 | orchestrator | ok: [testbed-node-0] 2025-04-04 01:59:45.845148 | orchestrator | ok: [testbed-node-1] 2025-04-04 01:59:45.845157 | orchestrator | ok: [testbed-node-2] 2025-04-04 01:59:45.845165 | orchestrator | 2025-04-04 01:59:45.845174 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-04-04 01:59:45.845183 | orchestrator | Friday 04 April 2025 01:57:28 +0000 (0:00:03.386) 0:07:16.777 ********** 2025-04-04 01:59:45.845191 | orchestrator | ok: [testbed-node-0] 2025-04-04 01:59:45.845200 | orchestrator | ok: [testbed-node-1] 2025-04-04 01:59:45.845208 | orchestrator | ok: [testbed-node-2] 2025-04-04 01:59:45.845217 | orchestrator | 2025-04-04 01:59:45.845225 | orchestrator | TASK [include_role : octavia] ************************************************** 2025-04-04 01:59:45.845238 | orchestrator | Friday 04 April 2025 01:57:32 +0000 (0:00:04.380) 0:07:21.158 ********** 2025-04-04 01:59:45.845246 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-04 01:59:45.845255 | orchestrator | 2025-04-04 01:59:45.845264 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2025-04-04 01:59:45.845272 | orchestrator | Friday 04 April 2025 01:57:34 +0000 (0:00:02.164) 0:07:23.323 ********** 2025-04-04 01:59:45.845345 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-04-04 01:59:45.845363 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-04-04 01:59:45.845372 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-04-04 01:59:45.845382 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-04-04 01:59:45.845391 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-04-04 01:59:45.845400 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-04-04 01:59:45.845413 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-04-04 01:59:45.845443 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-04-04 01:59:45.845454 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': 
['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-04-04 01:59:45.845463 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-04-04 01:59:45.845472 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-04-04 01:59:45.845481 | orchestrator 
| skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-04-04 01:59:45.845490 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-04-04 01:59:45.845524 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-04-04 01:59:45.845535 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 
'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-04-04 01:59:45.845544 | orchestrator | 2025-04-04 01:59:45.845553 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2025-04-04 01:59:45.845562 | orchestrator | Friday 04 April 2025 01:57:41 +0000 (0:00:06.238) 0:07:29.562 ********** 2025-04-04 01:59:45.845571 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-04-04 01:59:45.845580 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-04-04 01:59:45.845589 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-04-04 01:59:45.845604 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-04-04 01:59:45.845632 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': 
['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-04-04 01:59:45.845641 | orchestrator | skipping: [testbed-node-0] 2025-04-04 01:59:45.845649 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-04-04 01:59:45.845658 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-04-04 01:59:45.845666 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-04-04 01:59:45.845674 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-04-04 01:59:45.845687 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-04-04 01:59:45.845696 | orchestrator | skipping: [testbed-node-1] 2025-04-04 
01:59:45.845724 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-04-04 01:59:45.845733 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-04-04 01:59:45.845741 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-04-04 01:59:45.845750 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-04-04 01:59:45.845758 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-04-04 01:59:45.845773 | orchestrator | skipping: [testbed-node-2]
2025-04-04 01:59:45.845781 | orchestrator |
2025-04-04 01:59:45.845789 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] ***********************
2025-04-04 01:59:45.845797 | orchestrator | Friday 04 April 2025 01:57:42 +0000 (0:00:01.475) 0:07:31.037 **********
2025-04-04 01:59:45.845805 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2025-04-04 01:59:45.845814 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2025-04-04 01:59:45.845822 | orchestrator | skipping: [testbed-node-0]
2025-04-04 01:59:45.845830 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2025-04-04 01:59:45.845839 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2025-04-04 01:59:45.845847 | orchestrator | skipping: [testbed-node-1]
2025-04-04 01:59:45.845873 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2025-04-04 01:59:45.845883 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2025-04-04 01:59:45.845891 | orchestrator | skipping: [testbed-node-2]
2025-04-04 01:59:45.845900 | orchestrator |
2025-04-04 01:59:45.845908 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************
2025-04-04 01:59:45.845916 | orchestrator | Friday 04 April 2025 01:57:44 +0000 (0:00:01.765) 0:07:32.803 **********
2025-04-04 01:59:45.845924 | orchestrator | changed: [testbed-node-0]
2025-04-04 01:59:45.845932 | orchestrator | changed: [testbed-node-1]
2025-04-04 01:59:45.845940 | orchestrator | changed: [testbed-node-2]
2025-04-04 01:59:45.845948 | orchestrator |
2025-04-04 01:59:45.845956 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************
2025-04-04 01:59:45.845964 | orchestrator | Friday 04 April 2025 01:57:46 +0000 (0:00:01.846) 0:07:34.649 **********
2025-04-04 01:59:45.845972 | orchestrator | changed: [testbed-node-0]
2025-04-04 01:59:45.845980 | orchestrator | changed: [testbed-node-1]
2025-04-04 01:59:45.845988 | orchestrator | changed: [testbed-node-2]
2025-04-04 01:59:45.845996 | orchestrator |
2025-04-04 01:59:45.846003 | orchestrator | TASK [include_role : opensearch] ***********************************************
2025-04-04 01:59:45.846011 | orchestrator | Friday 04 April 2025 01:57:49 +0000 (0:00:03.668) 0:07:38.318 **********
2025-04-04 01:59:45.846060 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2
2025-04-04 01:59:45.846068 | orchestrator |
2025-04-04 01:59:45.846076 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] *****************
2025-04-04 01:59:45.846084 | orchestrator | Friday 04 April 2025 01:57:52 +0000 (0:00:02.304) 0:07:40.622 **********
2025-04-04 01:59:45.846093 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-04-04 01:59:45.846107 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-04-04 01:59:45.846116 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-04-04 01:59:45.846146 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-04-04 01:59:45.846157 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-04-04 01:59:45.846171 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-04-04 01:59:45.846179 | orchestrator |
2025-04-04 01:59:45.846188 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] ***
2025-04-04 01:59:45.846196 | orchestrator | Friday 04 April 2025 01:58:00 +0000 (0:00:08.691) 0:07:49.314 **********
2025-04-04 01:59:45.846223 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-04-04 01:59:45.846233 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-04-04 01:59:45.846241 | orchestrator | skipping: [testbed-node-0]
2025-04-04 01:59:45.846250 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-04-04 01:59:45.846263 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-04-04 01:59:45.846271 | orchestrator | skipping: [testbed-node-1]
2025-04-04 01:59:45.846293 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-04-04 01:59:45.846322 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-04-04 01:59:45.846332 | orchestrator | skipping: [testbed-node-2]
2025-04-04 01:59:45.846340 | orchestrator |
2025-04-04 01:59:45.846348 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ********************
2025-04-04 01:59:45.846368 | orchestrator | Friday 04 April 2025 01:58:02 +0000 (0:00:01.262) 0:07:50.576 **********
2025-04-04 01:59:45.846376 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})
2025-04-04 01:59:45.846385 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2025-04-04 01:59:45.846393 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2025-04-04 01:59:45.846402 | orchestrator | skipping: [testbed-node-0]
2025-04-04 01:59:45.846410 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})
2025-04-04 01:59:45.846418 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2025-04-04 01:59:45.846426 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2025-04-04 01:59:45.846434 | orchestrator | skipping: [testbed-node-1]
2025-04-04 01:59:45.846446 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})
2025-04-04 01:59:45.846455 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2025-04-04 01:59:45.846463 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2025-04-04 01:59:45.846471 | orchestrator | skipping: [testbed-node-2]
2025-04-04 01:59:45.846479 | orchestrator |
2025-04-04 01:59:45.846487 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] *********
2025-04-04 01:59:45.846498 | orchestrator | Friday 04 April 2025 01:58:04 +0000 (0:00:01.857) 0:07:52.434 **********
2025-04-04 01:59:45.846507 | orchestrator | skipping: [testbed-node-0]
2025-04-04 01:59:45.846515 | orchestrator | skipping: [testbed-node-1]
2025-04-04 01:59:45.846523 | orchestrator | skipping: [testbed-node-2]
2025-04-04 01:59:45.846531 | orchestrator |
2025-04-04 01:59:45.846539 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] *********
2025-04-04 01:59:45.846547 | orchestrator | Friday 04 April 2025 01:58:04 +0000 (0:00:00.571) 0:07:53.005 **********
2025-04-04 01:59:45.846555 | orchestrator | skipping: [testbed-node-0]
2025-04-04 01:59:45.846563 | orchestrator | skipping: [testbed-node-1]
2025-04-04 01:59:45.846571 | orchestrator | skipping: [testbed-node-2]
2025-04-04 01:59:45.846579 | orchestrator |
2025-04-04 01:59:45.846587 | orchestrator | TASK [include_role : prometheus] ***********************************************
2025-04-04 01:59:45.846595 | orchestrator | Friday 04 April 2025 01:58:07 +0000 (0:00:02.416) 0:07:55.421 **********
2025-04-04 01:59:45.846622 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2
2025-04-04 01:59:45.846631 | orchestrator |
2025-04-04 01:59:45.846639 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] *****************
2025-04-04 01:59:45.846647 | orchestrator | Friday 04 April 2025 01:58:09 +0000 (0:00:02.405) 0:07:57.827 **********
2025-04-04 01:59:45.846660 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-04-04 01:59:45.846669 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-04-04 01:59:45.846677 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-04 01:59:45.846686 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-04 01:59:45.846694 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-04-04 01:59:45.846703 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-04-04 01:59:45.846730 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-04-04 01:59:45.846744 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-04 01:59:45.846753 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-04 01:59:45.846761 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-04-04 01:59:45.846769 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-04-04 01:59:45.846778 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-04-04 01:59:45.846786 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-04 01:59:45.846813 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-04 01:59:45.846826 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-04-04 01:59:45.846835 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-04-04 01:59:45.846844 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-04-04 01:59:45.846853 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-04 01:59:45.846861 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-04 01:59:45.846893 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-04-04 01:59:45.846903 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-04 01:59:45.846912 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-04-04 01:59:45.846921 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-04-04 01:59:45.846930 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-04 01:59:45.846938 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-04 01:59:45.846951 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-04-04 01:59:45.846963 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-04 01:59:45.846972 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-04-04 01:59:45.846981 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 
'backend_http_extra': ['timeout server 45s']}}}})  2025-04-04 01:59:45.846990 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-04 01:59:45.846998 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-04 01:59:45.847021 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-04-04 01:59:45.847034 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-04 01:59:45.847043 | orchestrator | 2025-04-04 01:59:45.847051 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2025-04-04 01:59:45.847059 | orchestrator | Friday 04 April 2025 01:58:16 +0000 (0:00:06.942) 0:08:04.770 ********** 2025-04-04 01:59:45.847071 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-04-04 01:59:45.847080 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/:/host:ro,rslave'], 'dimensions': {}}})  2025-04-04 01:59:45.847088 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-04 01:59:45.847096 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-04 01:59:45.847113 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-04-04 01:59:45.847130 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-04-04 01:59:45.847139 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-04-04 01:59:45.847148 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-04 01:59:45.847156 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-04 01:59:45.847170 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-04-04 01:59:45.847183 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-04 
01:59:45.847191 | orchestrator | skipping: [testbed-node-0] 2025-04-04 01:59:45.847203 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-04-04 01:59:45.847212 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-04-04 01:59:45.847221 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-04-04 01:59:45.847229 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-04-04 01:59:45.847238 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-04 01:59:45.847252 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-04 01:59:45.847265 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-04 01:59:45.847273 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-04 01:59:45.847301 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-04-04 01:59:45.847311 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-04-04 01:59:45.847326 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-04-04 01:59:45.847335 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 
'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-04-04 01:59:45.847347 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-04-04 01:59:45.847360 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': 
'9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-04-04 01:59:45.847369 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-04 01:59:45.847377 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-04 01:59:45.847392 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-04 01:59:45.847404 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-04-04 01:59:45.847413 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-04 01:59:45.847421 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-04 01:59:45.847429 | orchestrator | skipping: [testbed-node-1] 2025-04-04 01:59:45.847442 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-04-04 01:59:45.847450 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-04 01:59:45.847459 | orchestrator | skipping: [testbed-node-2] 2025-04-04 01:59:45.847467 | orchestrator | 2025-04-04 01:59:45.847475 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2025-04-04 01:59:45.847487 | orchestrator | Friday 04 April 2025 01:58:18 +0000 (0:00:02.146) 0:08:06.917 ********** 2025-04-04 01:59:45.847496 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-04-04 01:59:45.847504 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-04-04 01:59:45.847513 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-04-04 01:59:45.847522 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2025-04-04 01:59:45.847538 | orchestrator | skipping: [testbed-node-0]
2025-04-04 01:59:45.847547 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})
2025-04-04 01:59:45.847555 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})
2025-04-04 01:59:45.847563 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2025-04-04 01:59:45.847572 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})
2025-04-04 01:59:45.847580 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2025-04-04 01:59:45.847588 | orchestrator | skipping: [testbed-node-2]
2025-04-04 01:59:45.847600 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})
2025-04-04 01:59:45.847609 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2025-04-04 01:59:45.847620 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2025-04-04 01:59:45.847629 | orchestrator | skipping: [testbed-node-1]
2025-04-04 01:59:45.847640 | orchestrator |
2025-04-04 01:59:45.847649 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] *********
2025-04-04 01:59:45.847657 | orchestrator | Friday 04 April 2025 01:58:20 +0000 (0:00:02.306) 0:08:09.223 **********
2025-04-04 01:59:45.847665 | orchestrator | skipping: [testbed-node-0]
2025-04-04 01:59:45.847673 | orchestrator | skipping: [testbed-node-1]
2025-04-04 01:59:45.847681 | orchestrator | skipping: [testbed-node-2]
2025-04-04 01:59:45.847690 | orchestrator |
2025-04-04 01:59:45.847698 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] *********
2025-04-04 01:59:45.847706 | orchestrator | Friday 04 April 2025 01:58:21 +0000 (0:00:00.984) 0:08:10.208 **********
2025-04-04 01:59:45.847714 | orchestrator | skipping: [testbed-node-0]
2025-04-04 01:59:45.847722 | orchestrator | skipping: [testbed-node-1]
2025-04-04 01:59:45.847730 | orchestrator | skipping: [testbed-node-2]
2025-04-04 01:59:45.847738 | orchestrator |
2025-04-04 01:59:45.847746 | orchestrator | TASK [include_role : rabbitmq] *************************************************
2025-04-04 01:59:45.847755 | orchestrator | Friday 04 April 2025 01:58:24 +0000 (0:00:02.701) 0:08:12.909 **********
2025-04-04 01:59:45.847763 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2
2025-04-04 01:59:45.847771 | orchestrator |
2025-04-04 01:59:45.847779 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] *******************
2025-04-04 01:59:45.847791 | orchestrator | Friday 04 April 2025 01:58:26 +0000 (0:00:02.415) 0:08:15.324 **********
2025-04-04 01:59:45.847808 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-04-04 01:59:45.847817 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-04-04 01:59:45.847830 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-04-04 01:59:45.847844 | orchestrator |
2025-04-04 01:59:45.847853 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] ***
2025-04-04 01:59:45.847861 | orchestrator | Friday 04 April 2025 01:58:30 +0000 (0:00:03.809) 0:08:19.134 **********
2025-04-04 01:59:45.847869 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-04-04 01:59:45.847882 | orchestrator | skipping: [testbed-node-0]
2025-04-04 01:59:45.847891 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-04-04 01:59:45.847899 | orchestrator | skipping: [testbed-node-1]
2025-04-04 01:59:45.847908 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-04-04 01:59:45.847916 | orchestrator | skipping: [testbed-node-2]
2025-04-04 01:59:45.847924 | orchestrator |
2025-04-04 01:59:45.847932 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] **********************
2025-04-04 01:59:45.847941 | orchestrator | Friday 04 April 2025 01:58:31 +0000 (0:00:00.894) 0:08:20.029 **********
2025-04-04 01:59:45.847949 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})
2025-04-04 01:59:45.847957 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})
2025-04-04 01:59:45.847966 | orchestrator | skipping: [testbed-node-0]
2025-04-04 01:59:45.847974 | orchestrator | skipping: [testbed-node-1]
2025-04-04 01:59:45.847985 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})
2025-04-04 01:59:45.847993 | orchestrator | skipping: [testbed-node-2]
2025-04-04 01:59:45.848002 | orchestrator |
2025-04-04 01:59:45.848010 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] ***********
2025-04-04 01:59:45.848018 | orchestrator | Friday 04 April 2025 01:58:33 +0000 (0:00:01.543) 0:08:21.572 **********
2025-04-04 01:59:45.848026 | orchestrator | skipping: [testbed-node-0]
2025-04-04 01:59:45.848034 | orchestrator | skipping: [testbed-node-1]
2025-04-04 01:59:45.848046 | orchestrator | skipping: [testbed-node-2]
2025-04-04 01:59:45.848055 | orchestrator |
2025-04-04 01:59:45.848063 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] ***********
2025-04-04 01:59:45.848071 | orchestrator | Friday 04 April 2025 01:58:33 +0000 (0:00:00.559) 0:08:22.132 **********
2025-04-04 01:59:45.848079 | orchestrator | skipping: [testbed-node-0]
2025-04-04 01:59:45.848087 | orchestrator | skipping: [testbed-node-1]
2025-04-04 01:59:45.848096 | orchestrator | skipping: [testbed-node-2]
2025-04-04 01:59:45.848104 | orchestrator |
2025-04-04 01:59:45.848112 | orchestrator | TASK [include_role : skyline] **************************************************
2025-04-04 01:59:45.848120 | orchestrator | Friday 04 April 2025 01:58:36 +0000 (0:00:02.449) 0:08:24.582 **********
2025-04-04 01:59:45.848128 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2
2025-04-04 01:59:45.848136 | orchestrator |
2025-04-04 01:59:45.848144 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ********************
2025-04-04 01:59:45.848152 | orchestrator | Friday 04 April 2025 01:58:38 +0000 (0:00:02.414) 0:08:26.996 **********
2025-04-04 01:59:45.848161 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2025-04-04 01:59:45.848175 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2025-04-04 01:59:45.848184 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2025-04-04 01:59:45.848196 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2025-04-04 01:59:45.848210 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2025-04-04 01:59:45.848224 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2025-04-04 01:59:45.848232 | orchestrator |
2025-04-04 01:59:45.848241 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] ***
2025-04-04 01:59:45.848249 | orchestrator | Friday 04 April 2025 01:58:50 +0000 (0:00:11.485) 0:08:38.482 **********
2025-04-04 01:59:45.848257 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2025-04-04 01:59:45.848273 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2025-04-04 01:59:45.848295 | orchestrator | skipping: [testbed-node-0]
2025-04-04 01:59:45.848304 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2025-04-04 01:59:45.848318 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2025-04-04 01:59:45.848327 | orchestrator | skipping: [testbed-node-1]
2025-04-04 01:59:45.848335 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2025-04-04 01:59:45.848349 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2025-04-04 01:59:45.848362 | orchestrator | skipping: [testbed-node-2]
2025-04-04 01:59:45.848371 | orchestrator |
2025-04-04 01:59:45.848379 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] ***********************
2025-04-04 01:59:45.848387 | orchestrator | Friday 04 April 2025 01:58:51 +0000 (0:00:01.805) 0:08:40.287 **********
2025-04-04 01:59:45.848395 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2025-04-04 01:59:45.848403 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2025-04-04 01:59:45.848412 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2025-04-04 01:59:45.848420 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2025-04-04 01:59:45.848428 | orchestrator | skipping: [testbed-node-0]
2025-04-04 01:59:45.848437 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2025-04-04 01:59:45.848445 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2025-04-04 01:59:45.848453 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2025-04-04 01:59:45.848461 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2025-04-04 01:59:45.848469 | orchestrator | skipping: [testbed-node-1]
2025-04-04 01:59:45.848478 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2025-04-04 01:59:45.848486 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2025-04-04 01:59:45.848494 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2025-04-04 01:59:45.848507 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2025-04-04 01:59:45.848515 | orchestrator | skipping: [testbed-node-2]
2025-04-04 01:59:45.848523 | orchestrator |
2025-04-04 01:59:45.848531 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************
2025-04-04 01:59:45.848539 | orchestrator | Friday 04 April 2025 01:58:53 +0000 (0:00:01.881) 0:08:42.169 **********
2025-04-04 01:59:45.848548 | orchestrator | changed: [testbed-node-0]
2025-04-04 01:59:45.848556 | orchestrator | changed: [testbed-node-1]
2025-04-04 01:59:45.848564 | orchestrator | changed: [testbed-node-2]
2025-04-04 01:59:45.848572 | orchestrator |
2025-04-04 01:59:45.848580 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************
2025-04-04 01:59:45.848588 | orchestrator | Friday 04 April 2025 01:58:55 +0000 (0:00:01.670) 0:08:43.840 **********
2025-04-04 01:59:45.848596 | orchestrator | changed: [testbed-node-0]
2025-04-04 01:59:45.848604 | orchestrator | changed: [testbed-node-1]
2025-04-04 01:59:45.848612 | orchestrator | changed: [testbed-node-2]
2025-04-04 01:59:45.848620 | orchestrator |
2025-04-04 01:59:45.848628 | orchestrator | TASK [include_role : swift] ****************************************************
2025-04-04 01:59:45.848640 | orchestrator | Friday 04 April 2025 01:58:58 +0000 (0:00:03.108) 0:08:46.949 **********
2025-04-04 01:59:45.848648 | orchestrator | skipping: [testbed-node-0]
2025-04-04 01:59:45.848656 | orchestrator | skipping: [testbed-node-1]
2025-04-04 01:59:45.848667 | orchestrator | skipping: [testbed-node-2]
2025-04-04 01:59:45.848675 | orchestrator |
2025-04-04 01:59:45.848683 | orchestrator | TASK [include_role : tacker] ***************************************************
2025-04-04 01:59:45.848692 | orchestrator | Friday 04 April 2025 01:58:58 +0000 (0:00:00.388) 0:08:47.338 **********
2025-04-04 01:59:45.848700 | orchestrator | skipping: [testbed-node-0]
2025-04-04 01:59:45.848708 | orchestrator | skipping: [testbed-node-1]
2025-04-04 01:59:45.848716 | orchestrator | skipping: [testbed-node-2]
2025-04-04 01:59:45.848724 | orchestrator |
2025-04-04 01:59:45.848732 | orchestrator | TASK [include_role : trove] ****************************************************
2025-04-04 01:59:45.848740 | orchestrator | Friday 04 April 2025 01:58:59 +0000 (0:00:00.721) 0:08:48.060 **********
2025-04-04 01:59:45.848748 | orchestrator | skipping: [testbed-node-0]
2025-04-04 01:59:45.848756 | orchestrator | skipping: [testbed-node-1]
2025-04-04 01:59:45.848765 | orchestrator | skipping: [testbed-node-2]
2025-04-04 01:59:45.848773 | orchestrator |
2025-04-04 01:59:45.848781 | orchestrator | TASK [include_role : venus] ****************************************************
2025-04-04 01:59:45.848789 | orchestrator | Friday 04 April 2025 01:59:00 +0000 (0:00:00.750) 0:08:48.810 **********
2025-04-04 01:59:45.848797 | orchestrator | skipping: [testbed-node-0]
2025-04-04 01:59:45.848805 | orchestrator | skipping: [testbed-node-1]
2025-04-04 01:59:45.848813 | orchestrator | skipping: [testbed-node-2]
2025-04-04 01:59:45.848821 | orchestrator |
2025-04-04 01:59:45.848829 | orchestrator | TASK [include_role : watcher] **************************************************
2025-04-04 01:59:45.848837 | orchestrator | Friday 04 April 2025 01:59:01 +0000 (0:00:00.808) 0:08:49.619 **********
2025-04-04 01:59:45.848846 | orchestrator | skipping: [testbed-node-0]
2025-04-04 01:59:45.848854 | orchestrator | skipping: [testbed-node-1]
2025-04-04 01:59:45.848862 | orchestrator | skipping: [testbed-node-2]
2025-04-04 01:59:45.848870 | orchestrator |
2025-04-04 01:59:45.848878 | orchestrator | TASK [include_role : zun] ******************************************************
2025-04-04 01:59:45.848886 | orchestrator | Friday 04 April 2025 01:59:01 +0000 (0:00:00.355) 0:08:49.974 **********
2025-04-04 01:59:45.848894 | orchestrator | skipping: [testbed-node-0]
2025-04-04 01:59:45.848902 | orchestrator | skipping: [testbed-node-1]
2025-04-04 01:59:45.848910 | orchestrator | skipping: [testbed-node-2]
2025-04-04 01:59:45.848919 | orchestrator |
2025-04-04 01:59:45.848927 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] *******
2025-04-04 01:59:45.848940 | orchestrator | Friday 04 April 2025 01:59:02 +0000 (0:00:01.287) 0:08:51.262 **********
2025-04-04 01:59:45.848948 | orchestrator | ok: [testbed-node-0]
2025-04-04 01:59:45.848956 | orchestrator | ok: [testbed-node-1]
2025-04-04 01:59:45.848964 | orchestrator | ok: [testbed-node-2]
2025-04-04 01:59:45.848973 | orchestrator |
2025-04-04 01:59:45.848981 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] **********************
2025-04-04 01:59:45.848989 | orchestrator | Friday 04 April 2025 01:59:04 +0000 (0:00:01.225) 0:08:52.488 **********
2025-04-04 01:59:45.848997 | orchestrator | ok: [testbed-node-0]
2025-04-04 01:59:45.849006 | orchestrator | ok: [testbed-node-1]
2025-04-04 01:59:45.849015 | orchestrator | ok: [testbed-node-2]
2025-04-04 01:59:45.849023 | orchestrator |
2025-04-04 01:59:45.849031 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] **************
2025-04-04 01:59:45.849043 | orchestrator | Friday 04 April 2025 01:59:04 +0000 (0:00:00.381) 0:08:52.870 **********
2025-04-04 01:59:45.849051 | orchestrator | ok: [testbed-node-0]
2025-04-04 01:59:45.849059 | orchestrator | ok: [testbed-node-1]
2025-04-04 01:59:45.849067 | orchestrator | ok: [testbed-node-2]
2025-04-04 01:59:45.849075 | orchestrator |
2025-04-04 01:59:45.849083 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] *****************
2025-04-04 01:59:45.849091 | orchestrator | Friday 04 April 2025 01:59:05 +0000 (0:00:01.405) 0:08:54.276 **********
2025-04-04 01:59:45.849099 | orchestrator | ok: [testbed-node-0]
2025-04-04 01:59:45.849107 | orchestrator | ok: [testbed-node-1]
2025-04-04 01:59:45.849116 | orchestrator | ok: [testbed-node-2]
2025-04-04 01:59:45.849124 | orchestrator |
2025-04-04 01:59:45.849132 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] ****************
2025-04-04 01:59:45.849140 | orchestrator | Friday 04 April 2025 01:59:07 +0000 (0:00:01.326) 0:08:55.603 **********
2025-04-04 01:59:45.849148 | orchestrator | ok: [testbed-node-0]
2025-04-04 01:59:45.849156 | orchestrator | ok: [testbed-node-1]
2025-04-04 01:59:45.849164 | orchestrator | ok: [testbed-node-2]
2025-04-04 01:59:45.849172 | orchestrator |
2025-04-04 01:59:45.849180 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] ****************
2025-04-04 01:59:45.849188 | orchestrator | Friday 04 April 2025 01:59:08 +0000 (0:00:01.029) 0:08:56.632 **********
2025-04-04 01:59:45.849196 | orchestrator | changed: [testbed-node-1]
2025-04-04 01:59:45.849205 | orchestrator | changed: [testbed-node-2]
2025-04-04 01:59:45.849213 | orchestrator | changed: [testbed-node-0]
2025-04-04 01:59:45.849221 | orchestrator |
2025-04-04 01:59:45.849229 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] **************
2025-04-04 01:59:45.849237 | orchestrator | Friday 04 April 2025 01:59:18 +0000 (0:00:10.177) 0:09:06.810 **********
2025-04-04 01:59:45.849245 | orchestrator | ok: [testbed-node-0]
2025-04-04 01:59:45.849253 | orchestrator | ok: [testbed-node-1]
2025-04-04 01:59:45.849261 | orchestrator | ok: [testbed-node-2]
2025-04-04 01:59:45.849269 | orchestrator |
2025-04-04 01:59:45.849278 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] ***************
2025-04-04 01:59:45.849322 | orchestrator | Friday 04 April 2025 01:59:19 +0000 (0:00:01.331) 0:09:08.142 **********
2025-04-04 01:59:45.849330 | orchestrator | changed: [testbed-node-0]
2025-04-04 01:59:45.849338 | orchestrator | changed: [testbed-node-1]
2025-04-04 01:59:45.849346 | orchestrator | changed: [testbed-node-2]
2025-04-04 01:59:45.849354 | orchestrator |
2025-04-04 01:59:45.849361 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] *************
2025-04-04 01:59:45.849368 | orchestrator | Friday 04 April 2025 01:59:27 +0000 (0:00:07.595) 0:09:15.738 **********
2025-04-04 01:59:45.849375 | orchestrator | ok: [testbed-node-0]
2025-04-04 01:59:45.849382 | orchestrator | ok: [testbed-node-1]
2025-04-04 01:59:45.849389 | orchestrator | ok: [testbed-node-2]
2025-04-04 01:59:45.849397 | orchestrator |
2025-04-04 01:59:45.849404 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] *************
2025-04-04 01:59:45.849411 | orchestrator | Friday 04 April 2025 01:59:30 +0000 (0:00:02.737) 0:09:18.475 **********
2025-04-04 01:59:45.849418 | orchestrator | changed: [testbed-node-0]
2025-04-04 01:59:45.849429 | orchestrator | changed: [testbed-node-1]
2025-04-04 01:59:45.849437 | orchestrator | changed: [testbed-node-2]
2025-04-04 01:59:45.849444 | orchestrator |
2025-04-04 01:59:45.849453 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] *****************
2025-04-04 01:59:45.849466 | orchestrator | Friday 04 April 2025 01:59:36 +0000 (0:00:06.318) 0:09:24.794 **********
2025-04-04 01:59:45.849473 | orchestrator | skipping: [testbed-node-0]
2025-04-04 01:59:45.849480 | orchestrator | skipping: [testbed-node-1]
2025-04-04 01:59:45.849487 | orchestrator | skipping: [testbed-node-2]
2025-04-04 01:59:45.849494 | orchestrator |
2025-04-04 01:59:45.849501 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] ****************
2025-04-04 01:59:45.849508 | orchestrator | Friday 04 April 2025 01:59:37 +0000 (0:00:00.781) 0:09:25.576 **********
2025-04-04 01:59:45.849515 | orchestrator | skipping: [testbed-node-0]
2025-04-04 01:59:45.849522 | orchestrator | skipping: [testbed-node-1]
2025-04-04 01:59:45.849529 | orchestrator | skipping: [testbed-node-2]
2025-04-04 01:59:45.849536 | orchestrator |
2025-04-04 01:59:45.849543 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] **************
2025-04-04 01:59:45.849550 |
orchestrator | Friday 04 April 2025 01:59:37 +0000 (0:00:00.737) 0:09:26.313 ********** 2025-04-04 01:59:45.849557 | orchestrator | skipping: [testbed-node-0] 2025-04-04 01:59:45.849565 | orchestrator | skipping: [testbed-node-1] 2025-04-04 01:59:45.849572 | orchestrator | skipping: [testbed-node-2] 2025-04-04 01:59:45.849579 | orchestrator | 2025-04-04 01:59:45.849586 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] **************** 2025-04-04 01:59:45.849593 | orchestrator | Friday 04 April 2025 01:59:38 +0000 (0:00:00.439) 0:09:26.752 ********** 2025-04-04 01:59:45.849600 | orchestrator | skipping: [testbed-node-0] 2025-04-04 01:59:45.849607 | orchestrator | skipping: [testbed-node-1] 2025-04-04 01:59:45.849614 | orchestrator | skipping: [testbed-node-2] 2025-04-04 01:59:45.849621 | orchestrator | 2025-04-04 01:59:45.849628 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] *************** 2025-04-04 01:59:45.849635 | orchestrator | Friday 04 April 2025 01:59:39 +0000 (0:00:00.813) 0:09:27.566 ********** 2025-04-04 01:59:45.849642 | orchestrator | skipping: [testbed-node-0] 2025-04-04 01:59:45.849649 | orchestrator | skipping: [testbed-node-1] 2025-04-04 01:59:45.849656 | orchestrator | skipping: [testbed-node-2] 2025-04-04 01:59:45.849663 | orchestrator | 2025-04-04 01:59:45.849670 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] ************* 2025-04-04 01:59:45.849677 | orchestrator | Friday 04 April 2025 01:59:39 +0000 (0:00:00.828) 0:09:28.394 ********** 2025-04-04 01:59:45.849684 | orchestrator | skipping: [testbed-node-0] 2025-04-04 01:59:45.849691 | orchestrator | skipping: [testbed-node-1] 2025-04-04 01:59:45.849698 | orchestrator | skipping: [testbed-node-2] 2025-04-04 01:59:45.849705 | orchestrator | 2025-04-04 01:59:45.849712 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] ************* 2025-04-04 01:59:45.849719 | 
orchestrator | Friday 04 April 2025 01:59:40 +0000 (0:00:00.493) 0:09:28.888 ********** 2025-04-04 01:59:45.849727 | orchestrator | ok: [testbed-node-0] 2025-04-04 01:59:45.849734 | orchestrator | ok: [testbed-node-1] 2025-04-04 01:59:45.849741 | orchestrator | ok: [testbed-node-2] 2025-04-04 01:59:45.849748 | orchestrator | 2025-04-04 01:59:45.849755 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************ 2025-04-04 01:59:45.849762 | orchestrator | Friday 04 April 2025 01:59:42 +0000 (0:00:01.622) 0:09:30.511 ********** 2025-04-04 01:59:45.849769 | orchestrator | ok: [testbed-node-0] 2025-04-04 01:59:45.849776 | orchestrator | ok: [testbed-node-2] 2025-04-04 01:59:45.849783 | orchestrator | ok: [testbed-node-1] 2025-04-04 01:59:45.849793 | orchestrator | 2025-04-04 01:59:45.849800 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-04 01:59:45.849808 | orchestrator | testbed-node-0 : ok=127  changed=79  unreachable=0 failed=0 skipped=92  rescued=0 ignored=0 2025-04-04 01:59:45.849815 | orchestrator | testbed-node-1 : ok=126  changed=79  unreachable=0 failed=0 skipped=92  rescued=0 ignored=0 2025-04-04 01:59:45.849826 | orchestrator | testbed-node-2 : ok=126  changed=79  unreachable=0 failed=0 skipped=92  rescued=0 ignored=0 2025-04-04 01:59:45.849833 | orchestrator | 2025-04-04 01:59:45.849840 | orchestrator | 2025-04-04 01:59:45.849847 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-04 01:59:45.849854 | orchestrator | Friday 04 April 2025 01:59:43 +0000 (0:00:01.457) 0:09:31.969 ********** 2025-04-04 01:59:45.849861 | orchestrator | =============================================================================== 2025-04-04 01:59:45.849868 | orchestrator | haproxy-config : Copying over skyline haproxy config ------------------- 11.49s 2025-04-04 01:59:45.849875 | orchestrator | haproxy-config : Copying over heat 
haproxy config ---------------------- 10.85s 2025-04-04 01:59:45.849882 | orchestrator | loadbalancer : Start backup haproxy container -------------------------- 10.18s 2025-04-04 01:59:45.849889 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 9.87s 2025-04-04 01:59:45.849897 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 8.69s 2025-04-04 01:59:45.849904 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 8.40s 2025-04-04 01:59:45.849911 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 7.81s 2025-04-04 01:59:45.849918 | orchestrator | loadbalancer : Start backup proxysql container -------------------------- 7.60s 2025-04-04 01:59:45.849925 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 7.47s 2025-04-04 01:59:45.849932 | orchestrator | haproxy-config : Copying over cinder haproxy config --------------------- 7.39s 2025-04-04 01:59:45.849939 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 7.01s 2025-04-04 01:59:45.849946 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 6.94s 2025-04-04 01:59:45.849953 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 6.37s 2025-04-04 01:59:45.849960 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 6.32s 2025-04-04 01:59:45.849973 | orchestrator | loadbalancer : Copying over haproxy.cfg --------------------------------- 6.30s 2025-04-04 01:59:48.875838 | orchestrator | haproxy-config : Copying over octavia haproxy config -------------------- 6.24s 2025-04-04 01:59:48.875950 | orchestrator | haproxy-config : Copying over keystone haproxy config ------------------- 6.19s 2025-04-04 01:59:48.875967 | orchestrator | loadbalancer : Copying checks for services which 
are enabled ------------ 6.12s 2025-04-04 01:59:48.875982 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 6.05s 2025-04-04 01:59:48.875996 | orchestrator | haproxy-config : Copying over horizon haproxy config -------------------- 5.96s 2025-04-04 01:59:48.876012 | orchestrator | 2025-04-04 01:59:45 | INFO  | Task 484ed2b0-b53d-4196-8611-26a176eb7d67 is in state STARTED 2025-04-04 01:59:48.876044 | orchestrator | 2025-04-04 01:59:45 | INFO  | Task 454bc2f6-9d45-451f-8ce1-d6a4331b55a5 is in state STARTED 2025-04-04 01:59:48.876059 | orchestrator | 2025-04-04 01:59:45 | INFO  | Task 11ce58f8-f19c-4d6f-b98b-10e7ce6b8d96 is in state STARTED 2025-04-04 01:59:48.876073 | orchestrator | 2025-04-04 01:59:45 | INFO  | Wait 1 second(s) until the next check 2025-04-04 01:59:48.876104 | orchestrator | 2025-04-04 01:59:48 | INFO  | Task fd1c07e6-f0db-4865-a8a3-41f95612fdf3 is in state STARTED 2025-04-04 01:59:48.876794 | orchestrator | 2025-04-04 01:59:48 | INFO  | Task 484ed2b0-b53d-4196-8611-26a176eb7d67 is in state STARTED 2025-04-04 01:59:48.877684 | orchestrator | 2025-04-04 01:59:48 | INFO  | Task 454bc2f6-9d45-451f-8ce1-d6a4331b55a5 is in state STARTED 2025-04-04 01:59:48.878929 | orchestrator | 2025-04-04 01:59:48 | INFO  | Task 11ce58f8-f19c-4d6f-b98b-10e7ce6b8d96 is in state STARTED 2025-04-04 01:59:48.880391 | orchestrator | 2025-04-04 01:59:48 | INFO  | Wait 1 second(s) until the next check 2025-04-04 01:59:51.937098 | orchestrator | 2025-04-04 01:59:51 | INFO  | Task fd1c07e6-f0db-4865-a8a3-41f95612fdf3 is in state STARTED 2025-04-04 01:59:51.937850 | orchestrator | 2025-04-04 01:59:51 | INFO  | Task 484ed2b0-b53d-4196-8611-26a176eb7d67 is in state STARTED 2025-04-04 01:59:51.938840 | orchestrator | 2025-04-04 01:59:51 | INFO  | Task 454bc2f6-9d45-451f-8ce1-d6a4331b55a5 is in state STARTED 2025-04-04 01:59:51.941092 | orchestrator | 2025-04-04 01:59:51 | INFO  | Task 11ce58f8-f19c-4d6f-b98b-10e7ce6b8d96 is in state STARTED 
2025-04-04 01:59:55.004603 | orchestrator | 2025-04-04 01:59:51 | INFO  | Wait 1 second(s) until the next check 2025-04-04 01:59:55.004755 | orchestrator | 2025-04-04 01:59:55 | INFO  | Task fd1c07e6-f0db-4865-a8a3-41f95612fdf3 is in state STARTED 2025-04-04 01:59:55.006226 | orchestrator | 2025-04-04 01:59:55 | INFO  | Task 484ed2b0-b53d-4196-8611-26a176eb7d67 is in state STARTED 2025-04-04 01:59:55.006260 | orchestrator | 2025-04-04 01:59:55 | INFO  | Task 454bc2f6-9d45-451f-8ce1-d6a4331b55a5 is in state STARTED 2025-04-04 01:59:55.007838 | orchestrator | 2025-04-04 01:59:55 | INFO  | Task 11ce58f8-f19c-4d6f-b98b-10e7ce6b8d96 is in state STARTED 2025-04-04 01:59:55.008479 | orchestrator | 2025-04-04 01:59:55 | INFO  | Wait 1 second(s) until the next check 2025-04-04 01:59:58.066399 | orchestrator | 2025-04-04 01:59:58 | INFO  | Task fd1c07e6-f0db-4865-a8a3-41f95612fdf3 is in state STARTED 2025-04-04 01:59:58.066923 | orchestrator | 2025-04-04 01:59:58 | INFO  | Task 484ed2b0-b53d-4196-8611-26a176eb7d67 is in state STARTED 2025-04-04 01:59:58.071162 | orchestrator | 2025-04-04 01:59:58 | INFO  | Task 454bc2f6-9d45-451f-8ce1-d6a4331b55a5 is in state STARTED 2025-04-04 01:59:58.072338 | orchestrator | 2025-04-04 01:59:58 | INFO  | Task 11ce58f8-f19c-4d6f-b98b-10e7ce6b8d96 is in state STARTED 2025-04-04 01:59:58.072667 | orchestrator | 2025-04-04 01:59:58 | INFO  | Wait 1 second(s) until the next check 2025-04-04 02:00:01.124386 | orchestrator | 2025-04-04 02:00:01 | INFO  | Task fd1c07e6-f0db-4865-a8a3-41f95612fdf3 is in state STARTED 2025-04-04 02:00:01.125226 | orchestrator | 2025-04-04 02:00:01 | INFO  | Task 484ed2b0-b53d-4196-8611-26a176eb7d67 is in state STARTED 2025-04-04 02:00:01.126968 | orchestrator | 2025-04-04 02:00:01 | INFO  | Task 454bc2f6-9d45-451f-8ce1-d6a4331b55a5 is in state STARTED 2025-04-04 02:00:01.129726 | orchestrator | 2025-04-04 02:00:01 | INFO  | Task 11ce58f8-f19c-4d6f-b98b-10e7ce6b8d96 is in state STARTED 2025-04-04 02:00:04.185985 | 
orchestrator | 2025-04-04 02:00:01 | INFO  | Wait 1 second(s) until the next check 2025-04-04 02:00:04.186173 | orchestrator | 2025-04-04 02:00:04 | INFO  | Task fd1c07e6-f0db-4865-a8a3-41f95612fdf3 is in state STARTED 2025-04-04 02:00:04.189957 | orchestrator | 2025-04-04 02:00:04 | INFO  | Task 484ed2b0-b53d-4196-8611-26a176eb7d67 is in state STARTED 2025-04-04 02:00:04.190113 | orchestrator | 2025-04-04 02:00:04 | INFO  | Task 454bc2f6-9d45-451f-8ce1-d6a4331b55a5 is in state STARTED 2025-04-04 02:00:04.197040 | orchestrator | 2025-04-04 02:00:04 | INFO  | Task 11ce58f8-f19c-4d6f-b98b-10e7ce6b8d96 is in state STARTED 2025-04-04 02:00:07.248321 | orchestrator | 2025-04-04 02:00:04 | INFO  | Wait 1 second(s) until the next check 2025-04-04 02:00:07.248461 | orchestrator | 2025-04-04 02:00:07 | INFO  | Task fd1c07e6-f0db-4865-a8a3-41f95612fdf3 is in state STARTED 2025-04-04 02:00:07.249057 | orchestrator | 2025-04-04 02:00:07 | INFO  | Task 484ed2b0-b53d-4196-8611-26a176eb7d67 is in state STARTED 2025-04-04 02:00:07.249077 | orchestrator | 2025-04-04 02:00:07 | INFO  | Task 454bc2f6-9d45-451f-8ce1-d6a4331b55a5 is in state STARTED 2025-04-04 02:00:07.249890 | orchestrator | 2025-04-04 02:00:07 | INFO  | Task 11ce58f8-f19c-4d6f-b98b-10e7ce6b8d96 is in state STARTED 2025-04-04 02:00:10.318369 | orchestrator | 2025-04-04 02:00:07 | INFO  | Wait 1 second(s) until the next check 2025-04-04 02:00:10.318510 | orchestrator | 2025-04-04 02:00:10 | INFO  | Task fd1c07e6-f0db-4865-a8a3-41f95612fdf3 is in state STARTED 2025-04-04 02:00:10.318872 | orchestrator | 2025-04-04 02:00:10 | INFO  | Task 484ed2b0-b53d-4196-8611-26a176eb7d67 is in state STARTED 2025-04-04 02:00:10.325425 | orchestrator | 2025-04-04 02:00:10 | INFO  | Task 454bc2f6-9d45-451f-8ce1-d6a4331b55a5 is in state STARTED 2025-04-04 02:00:10.327236 | orchestrator | 2025-04-04 02:00:10 | INFO  | Task 11ce58f8-f19c-4d6f-b98b-10e7ce6b8d96 is in state STARTED 2025-04-04 02:00:13.374758 | orchestrator | 2025-04-04 
02:00:10 | INFO  | Wait 1 second(s) until the next check 2025-04-04 02:00:13.374904 | orchestrator | 2025-04-04 02:00:13 | INFO  | Task fd1c07e6-f0db-4865-a8a3-41f95612fdf3 is in state STARTED 2025-04-04 02:00:13.375440 | orchestrator | 2025-04-04 02:00:13 | INFO  | Task 484ed2b0-b53d-4196-8611-26a176eb7d67 is in state STARTED 2025-04-04 02:00:13.376250 | orchestrator | 2025-04-04 02:00:13 | INFO  | Task 454bc2f6-9d45-451f-8ce1-d6a4331b55a5 is in state STARTED 2025-04-04 02:00:13.377114 | orchestrator | 2025-04-04 02:00:13 | INFO  | Task 11ce58f8-f19c-4d6f-b98b-10e7ce6b8d96 is in state STARTED 2025-04-04 02:00:13.377352 | orchestrator | 2025-04-04 02:00:13 | INFO  | Wait 1 second(s) until the next check 2025-04-04 02:00:16.440006 | orchestrator | 2025-04-04 02:00:16 | INFO  | Task fd1c07e6-f0db-4865-a8a3-41f95612fdf3 is in state STARTED 2025-04-04 02:00:16.440759 | orchestrator | 2025-04-04 02:00:16 | INFO  | Task 484ed2b0-b53d-4196-8611-26a176eb7d67 is in state STARTED 2025-04-04 02:00:16.440975 | orchestrator | 2025-04-04 02:00:16 | INFO  | Task 454bc2f6-9d45-451f-8ce1-d6a4331b55a5 is in state STARTED 2025-04-04 02:00:16.441004 | orchestrator | 2025-04-04 02:00:16 | INFO  | Task 11ce58f8-f19c-4d6f-b98b-10e7ce6b8d96 is in state STARTED 2025-04-04 02:00:19.503087 | orchestrator | 2025-04-04 02:00:16 | INFO  | Wait 1 second(s) until the next check 2025-04-04 02:00:19.503234 | orchestrator | 2025-04-04 02:00:19 | INFO  | Task fd1c07e6-f0db-4865-a8a3-41f95612fdf3 is in state STARTED 2025-04-04 02:00:19.503544 | orchestrator | 2025-04-04 02:00:19 | INFO  | Task 484ed2b0-b53d-4196-8611-26a176eb7d67 is in state STARTED 2025-04-04 02:00:19.504048 | orchestrator | 2025-04-04 02:00:19 | INFO  | Task 454bc2f6-9d45-451f-8ce1-d6a4331b55a5 is in state STARTED 2025-04-04 02:00:19.505188 | orchestrator | 2025-04-04 02:00:19 | INFO  | Task 11ce58f8-f19c-4d6f-b98b-10e7ce6b8d96 is in state STARTED 2025-04-04 02:00:22.562502 | orchestrator | 2025-04-04 02:00:19 | INFO  | Wait 1 
second(s) until the next check 2025-04-04 02:00:22.562638 | orchestrator | 2025-04-04 02:00:22 | INFO  | Task fd1c07e6-f0db-4865-a8a3-41f95612fdf3 is in state STARTED 2025-04-04 02:00:22.565757 | orchestrator | 2025-04-04 02:00:22 | INFO  | Task 484ed2b0-b53d-4196-8611-26a176eb7d67 is in state STARTED 2025-04-04 02:00:22.565792 | orchestrator | 2025-04-04 02:00:22 | INFO  | Task 454bc2f6-9d45-451f-8ce1-d6a4331b55a5 is in state STARTED 2025-04-04 02:00:22.568333 | orchestrator | 2025-04-04 02:00:22 | INFO  | Task 11ce58f8-f19c-4d6f-b98b-10e7ce6b8d96 is in state STARTED 2025-04-04 02:00:25.366502 | RUN END RESULT_TIMED_OUT: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main] 2025-04-04 02:00:25.372377 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main] 2025-04-04 02:00:26.073701 | 2025-04-04 02:00:26.073860 | PLAY [Post output play] 2025-04-04 02:00:26.102802 | 2025-04-04 02:00:26.102934 | LOOP [stage-output : Register sources] 2025-04-04 02:00:26.188501 | 2025-04-04 02:00:26.188769 | TASK [stage-output : Check sudo] 2025-04-04 02:00:26.915837 | orchestrator | sudo: a password is required 2025-04-04 02:00:27.231607 | orchestrator | ok: Runtime: 0:00:00.014763 2025-04-04 02:00:27.247601 | 2025-04-04 02:00:27.247739 | LOOP [stage-output : Set source and destination for files and folders] 2025-04-04 02:00:27.287903 | 2025-04-04 02:00:27.288111 | TASK [stage-output : Build a list of source, dest dictionaries] 2025-04-04 02:00:27.382215 | orchestrator | ok 2025-04-04 02:00:27.393290 | 2025-04-04 02:00:27.393408 | LOOP [stage-output : Ensure target folders exist] 2025-04-04 02:00:27.846862 | orchestrator | ok: "docs" 2025-04-04 02:00:27.847215 | 2025-04-04 02:00:28.085457 | orchestrator | ok: "artifacts" 2025-04-04 02:00:28.311667 | orchestrator | ok: "logs" 2025-04-04 02:00:28.331173 | 2025-04-04 02:00:28.331364 | LOOP [stage-output : Copy files and folders to staging folder] 2025-04-04 02:00:28.376651 | 2025-04-04 02:00:28.376903 
| TASK [stage-output : Make all log files readable] 2025-04-04 02:00:28.652443 | orchestrator | ok 2025-04-04 02:00:28.661970 | 2025-04-04 02:00:28.662077 | TASK [stage-output : Rename log files that match extensions_to_txt] 2025-04-04 02:00:28.707322 | orchestrator | skipping: Conditional result was False 2025-04-04 02:00:28.722609 | 2025-04-04 02:00:28.722727 | TASK [stage-output : Discover log files for compression] 2025-04-04 02:00:28.747597 | orchestrator | skipping: Conditional result was False 2025-04-04 02:00:28.761422 | 2025-04-04 02:00:28.761540 | LOOP [stage-output : Archive everything from logs] 2025-04-04 02:00:28.834054 | 2025-04-04 02:00:28.834193 | PLAY [Post cleanup play] 2025-04-04 02:00:28.857507 | 2025-04-04 02:00:28.857610 | TASK [Set cloud fact (Zuul deployment)] 2025-04-04 02:00:28.923244 | orchestrator | ok 2025-04-04 02:00:28.933996 | 2025-04-04 02:00:28.934097 | TASK [Set cloud fact (local deployment)] 2025-04-04 02:00:28.968464 | orchestrator | skipping: Conditional result was False 2025-04-04 02:00:28.981002 | 2025-04-04 02:00:28.981118 | TASK [Clean the cloud environment] 2025-04-04 02:00:29.598222 | orchestrator | 2025-04-04 02:00:29 - clean up servers 2025-04-04 02:00:30.476734 | orchestrator | 2025-04-04 02:00:30 - testbed-manager 2025-04-04 02:00:30.565571 | orchestrator | 2025-04-04 02:00:30 - testbed-node-3 2025-04-04 02:00:30.658917 | orchestrator | 2025-04-04 02:00:30 - testbed-node-5 2025-04-04 02:00:30.746498 | orchestrator | 2025-04-04 02:00:30 - testbed-node-0 2025-04-04 02:00:30.838960 | orchestrator | 2025-04-04 02:00:30 - testbed-node-4 2025-04-04 02:00:30.933985 | orchestrator | 2025-04-04 02:00:30 - testbed-node-2 2025-04-04 02:00:31.034308 | orchestrator | 2025-04-04 02:00:31 - testbed-node-1 2025-04-04 02:00:31.132174 | orchestrator | 2025-04-04 02:00:31 - clean up keypairs 2025-04-04 02:00:31.150271 | orchestrator | 2025-04-04 02:00:31 - testbed 2025-04-04 02:00:31.175195 | orchestrator | 2025-04-04 02:00:31 - wait 
for servers to be gone 2025-04-04 02:00:46.710456 | orchestrator | 2025-04-04 02:00:46 - clean up ports 2025-04-04 02:00:46.938056 | orchestrator | 2025-04-04 02:00:46 - 38d30e5b-650e-4598-b5fe-96e9570c9a66 2025-04-04 02:00:47.155502 | orchestrator | 2025-04-04 02:00:47 - 711e421e-789b-40f4-b094-1da6812c8afe 2025-04-04 02:00:47.434361 | orchestrator | 2025-04-04 02:00:47 - ab132a81-d8b0-49b5-b95b-2d4094468ee3 2025-04-04 02:00:47.674686 | orchestrator | 2025-04-04 02:00:47 - b56fb027-96ef-4285-baf2-cd528873fb18 2025-04-04 02:00:47.884647 | orchestrator | 2025-04-04 02:00:47 - cc867b26-532c-4628-b775-e30318730bfc 2025-04-04 02:00:48.068503 | orchestrator | 2025-04-04 02:00:48 - cf5efb77-bf08-40d5-931b-059849d8f8d0 2025-04-04 02:00:48.286119 | orchestrator | 2025-04-04 02:00:48 - d92b05ad-ef13-48d4-99c8-9fe65b6d37e4 2025-04-04 02:00:48.639927 | orchestrator | 2025-04-04 02:00:48 - clean up volumes 2025-04-04 02:00:48.788921 | orchestrator | 2025-04-04 02:00:48 - testbed-volume-2-node-base 2025-04-04 02:00:48.828885 | orchestrator | 2025-04-04 02:00:48 - testbed-volume-manager-base 2025-04-04 02:00:48.872863 | orchestrator | 2025-04-04 02:00:48 - testbed-volume-5-node-base 2025-04-04 02:00:48.915120 | orchestrator | 2025-04-04 02:00:48 - testbed-volume-4-node-base 2025-04-04 02:00:48.953737 | orchestrator | 2025-04-04 02:00:48 - testbed-volume-0-node-base 2025-04-04 02:00:48.994737 | orchestrator | 2025-04-04 02:00:48 - testbed-volume-3-node-base 2025-04-04 02:00:49.035489 | orchestrator | 2025-04-04 02:00:49 - testbed-volume-1-node-base 2025-04-04 02:00:49.075558 | orchestrator | 2025-04-04 02:00:49 - testbed-volume-16-node-4 2025-04-04 02:00:49.115242 | orchestrator | 2025-04-04 02:00:49 - testbed-volume-10-node-4 2025-04-04 02:00:49.156441 | orchestrator | 2025-04-04 02:00:49 - testbed-volume-11-node-5 2025-04-04 02:00:49.199201 | orchestrator | 2025-04-04 02:00:49 - testbed-volume-1-node-1 2025-04-04 02:00:49.237326 | orchestrator | 2025-04-04 02:00:49 - 
testbed-volume-14-node-2 2025-04-04 02:00:49.284154 | orchestrator | 2025-04-04 02:00:49 - testbed-volume-9-node-3 2025-04-04 02:00:49.328372 | orchestrator | 2025-04-04 02:00:49 - testbed-volume-7-node-1 2025-04-04 02:00:49.366071 | orchestrator | 2025-04-04 02:00:49 - testbed-volume-4-node-4 2025-04-04 02:00:49.407495 | orchestrator | 2025-04-04 02:00:49 - testbed-volume-17-node-5 2025-04-04 02:00:49.447360 | orchestrator | 2025-04-04 02:00:49 - testbed-volume-5-node-5 2025-04-04 02:00:49.488625 | orchestrator | 2025-04-04 02:00:49 - testbed-volume-3-node-3 2025-04-04 02:00:49.531223 | orchestrator | 2025-04-04 02:00:49 - testbed-volume-6-node-0 2025-04-04 02:00:49.572168 | orchestrator | 2025-04-04 02:00:49 - testbed-volume-15-node-3 2025-04-04 02:00:49.615689 | orchestrator | 2025-04-04 02:00:49 - testbed-volume-12-node-0 2025-04-04 02:00:49.657593 | orchestrator | 2025-04-04 02:00:49 - testbed-volume-8-node-2 2025-04-04 02:00:49.696383 | orchestrator | 2025-04-04 02:00:49 - testbed-volume-0-node-0 2025-04-04 02:00:49.740435 | orchestrator | 2025-04-04 02:00:49 - testbed-volume-13-node-1 2025-04-04 02:00:49.782164 | orchestrator | 2025-04-04 02:00:49 - testbed-volume-2-node-2 2025-04-04 02:00:49.818932 | orchestrator | 2025-04-04 02:00:49 - disconnect routers 2025-04-04 02:00:49.906830 | orchestrator | 2025-04-04 02:00:49 - testbed 2025-04-04 02:00:50.825015 | orchestrator | 2025-04-04 02:00:50 - clean up subnets 2025-04-04 02:00:50.858838 | orchestrator | 2025-04-04 02:00:50 - subnet-testbed-management 2025-04-04 02:00:50.996565 | orchestrator | 2025-04-04 02:00:50 - clean up networks 2025-04-04 02:00:51.158064 | orchestrator | 2025-04-04 02:00:51 - net-testbed-management 2025-04-04 02:00:51.410908 | orchestrator | 2025-04-04 02:00:51 - clean up security groups 2025-04-04 02:00:51.444848 | orchestrator | 2025-04-04 02:00:51 - testbed-node 2025-04-04 02:00:51.541686 | orchestrator | 2025-04-04 02:00:51 - testbed-management 2025-04-04 02:00:51.635566 | 
orchestrator | 2025-04-04 02:00:51 - clean up floating ips 2025-04-04 02:00:51.670545 | orchestrator | 2025-04-04 02:00:51 - 81.163.192.77 2025-04-04 02:00:52.168900 | orchestrator | 2025-04-04 02:00:52 - clean up routers 2025-04-04 02:00:52.253454 | orchestrator | 2025-04-04 02:00:52 - testbed 2025-04-04 02:00:53.038117 | orchestrator | changed 2025-04-04 02:00:53.074460 | 2025-04-04 02:00:53.074640 | PLAY RECAP 2025-04-04 02:00:53.074695 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0 2025-04-04 02:00:53.074722 | 2025-04-04 02:00:53.195572 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main] 2025-04-04 02:00:53.201618 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main] 2025-04-04 02:00:53.858183 | 2025-04-04 02:00:53.858356 | PLAY [Base post-fetch] 2025-04-04 02:00:53.887347 | 2025-04-04 02:00:53.887476 | TASK [fetch-output : Set log path for multiple nodes] 2025-04-04 02:00:53.954003 | orchestrator | skipping: Conditional result was False 2025-04-04 02:00:53.962138 | 2025-04-04 02:00:53.962306 | TASK [fetch-output : Set log path for single node] 2025-04-04 02:00:54.017205 | orchestrator | ok 2025-04-04 02:00:54.025648 | 2025-04-04 02:00:54.025764 | LOOP [fetch-output : Ensure local output dirs] 2025-04-04 02:00:54.492749 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/404de2cc1cd54369b8b7ac59e13b3105/work/logs" 2025-04-04 02:00:54.741599 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/404de2cc1cd54369b8b7ac59e13b3105/work/artifacts" 2025-04-04 02:00:54.995341 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/404de2cc1cd54369b8b7ac59e13b3105/work/docs" 2025-04-04 02:00:55.020129 | 2025-04-04 02:00:55.020376 | LOOP [fetch-output : Collect logs, artifacts and docs] 2025-04-04 02:00:55.790993 | orchestrator | changed: .d..t...... 
./ 2025-04-04 02:00:55.791315 | orchestrator | changed: All items complete 2025-04-04 02:00:55.791554 | 2025-04-04 02:00:56.406796 | orchestrator | changed: .d..t...... ./ 2025-04-04 02:00:56.938500 | orchestrator | changed: .d..t...... ./ 2025-04-04 02:00:56.974457 | 2025-04-04 02:00:56.974600 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir] 2025-04-04 02:00:57.019203 | orchestrator | skipping: Conditional result was False 2025-04-04 02:00:57.027839 | orchestrator | skipping: Conditional result was False 2025-04-04 02:00:57.071037 | 2025-04-04 02:00:57.071594 | PLAY RECAP 2025-04-04 02:00:57.071682 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0 2025-04-04 02:00:57.071711 | 2025-04-04 02:00:57.187797 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main] 2025-04-04 02:00:57.191070 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main] 2025-04-04 02:00:57.875021 | 2025-04-04 02:00:57.875173 | PLAY [Base post] 2025-04-04 02:00:57.904612 | 2025-04-04 02:00:57.904742 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes] 2025-04-04 02:00:59.003506 | orchestrator | changed 2025-04-04 02:00:59.059405 | 2025-04-04 02:00:59.059704 | PLAY RECAP 2025-04-04 02:00:59.059814 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0 2025-04-04 02:00:59.059883 | 2025-04-04 02:00:59.218282 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main] 2025-04-04 02:00:59.221462 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main] 2025-04-04 02:00:59.955835 | 2025-04-04 02:00:59.955985 | PLAY [Base post-logs] 2025-04-04 02:00:59.973603 | 2025-04-04 02:00:59.973733 | TASK [generate-zuul-manifest : Generate Zuul manifest] 2025-04-04 02:01:00.422367 | localhost | changed 2025-04-04 02:01:00.427497 | 2025-04-04 
02:01:00.427647 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul] 2025-04-04 02:01:00.457174 | localhost | ok 2025-04-04 02:01:00.465119 | 2025-04-04 02:01:00.465298 | TASK [Set zuul-log-path fact] 2025-04-04 02:01:00.484611 | localhost | ok 2025-04-04 02:01:00.497910 | 2025-04-04 02:01:00.498030 | TASK [set-zuul-log-path-fact : Set log path for a build] 2025-04-04 02:01:00.526340 | localhost | ok 2025-04-04 02:01:00.533749 | 2025-04-04 02:01:00.533860 | TASK [upload-logs : Create log directories] 2025-04-04 02:01:01.034756 | localhost | changed 2025-04-04 02:01:01.039334 | 2025-04-04 02:01:01.039452 | TASK [upload-logs : Ensure logs are readable before uploading] 2025-04-04 02:01:01.564013 | localhost -> localhost | ok: Runtime: 0:00:00.007045 2025-04-04 02:01:01.575631 | 2025-04-04 02:01:01.575792 | TASK [upload-logs : Upload logs to log server] 2025-04-04 02:01:02.150897 | localhost | Output suppressed because no_log was given 2025-04-04 02:01:02.156697 | 2025-04-04 02:01:02.156861 | LOOP [upload-logs : Compress console log and json output] 2025-04-04 02:01:02.230319 | localhost | skipping: Conditional result was False 2025-04-04 02:01:02.248432 | localhost | skipping: Conditional result was False 2025-04-04 02:01:02.262756 | 2025-04-04 02:01:02.262944 | LOOP [upload-logs : Upload compressed console log and json output] 2025-04-04 02:01:02.323705 | localhost | skipping: Conditional result was False 2025-04-04 02:01:02.324198 | 2025-04-04 02:01:02.336455 | localhost | skipping: Conditional result was False 2025-04-04 02:01:02.348678 | 2025-04-04 02:01:02.348865 | LOOP [upload-logs : Upload console log and json output]